| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
openshift-monitoring |
thanos-querier-cc996c4bd-j4hzr |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-cc996c4bd-j4hzr to master-0 | ||
openshift-marketplace |
marketplace-operator-7d67745bb7-dwcxb |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-marketplace |
redhat-operators-6z4sc |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-6z4sc to master-0 | ||
openshift-marketplace |
redhat-operators-6rjqz |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-6rjqz to master-0 | ||
openshift-marketplace |
marketplace-operator-7d67745bb7-dwcxb |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-dns |
node-resolver-4xlhs |
Scheduled |
Successfully assigned openshift-dns/node-resolver-4xlhs to master-0 | ||
openshift-marketplace |
community-operators-7fwtv |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-7fwtv to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-marketplace |
community-operators-582c5 |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-582c5 to master-0 | ||
openshift-marketplace |
certified-operators-t8rt7 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-t8rt7 to master-0 | ||
openshift-etcd-operator |
etcd-operator-7978bf889c-n64v4 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-etcd-operator |
etcd-operator-7978bf889c-n64v4 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-etcd-operator |
etcd-operator-7978bf889c-n64v4 |
Scheduled |
Successfully assigned openshift-etcd-operator/etcd-operator-7978bf889c-n64v4 to master-0 | ||
openshift-machine-api |
control-plane-machine-set-operator-66f4cc99d4-x278n |
Scheduled |
Successfully assigned openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n to master-0 | ||
openshift-authentication |
oauth-openshift-747bdb58b5-mn76f |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-747bdb58b5-mn76f |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-747bdb58b5-mn76f |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-747bdb58b5-mn76f to master-0 | ||
openshift-machine-api |
machine-api-operator-7486ff55f-wcnxg |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-7486ff55f-wcnxg to master-0 | ||
openshift-operator-lifecycle-manager |
packageserver-7c64dd9d8b-49skr |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr to master-0 | ||
openshift-authentication |
oauth-openshift-79f7f4d988-pxd4d |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-79f7f4d988-pxd4d to master-0 | ||
openshift-machine-api |
cluster-baremetal-operator-5fdc576499-j2n8j |
Scheduled |
Successfully assigned openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j to master-0 | ||
openshift-console-operator |
console-operator-77df56447c-vsrxx |
Scheduled |
Successfully assigned openshift-console-operator/console-operator-77df56447c-vsrxx to master-0 | ||
openshift-ingress-operator |
ingress-operator-85dbd94574-8jfp5 |
Scheduled |
Successfully assigned openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5 to master-0 | ||
openshift-ingress-operator |
ingress-operator-85dbd94574-8jfp5 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress-operator |
ingress-operator-85dbd94574-8jfp5 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-machine-api |
cluster-autoscaler-operator-7f88444875-6dk29 |
Scheduled |
Successfully assigned openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29 to master-0 | ||
openshift-kube-scheduler-operator |
openshift-kube-scheduler-operator-5f574c6c79-86bh9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-kube-scheduler-operator |
openshift-kube-scheduler-operator-5f574c6c79-86bh9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-authentication-operator |
authentication-operator-7479ffdf48-hpdzl |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-authentication-operator |
authentication-operator-7479ffdf48-hpdzl |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-authentication-operator |
authentication-operator-7479ffdf48-hpdzl |
Scheduled |
Successfully assigned openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl to master-0 | ||
openshift-kube-scheduler-operator |
openshift-kube-scheduler-operator-5f574c6c79-86bh9 |
Scheduled |
Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9 to master-0 | ||
openshift-operator-lifecycle-manager |
package-server-manager-75b4d49d4c-h599p |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p to master-0 | ||
openshift-operator-lifecycle-manager |
package-server-manager-75b4d49d4c-h599p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
package-server-manager-75b4d49d4c-h599p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-marketplace |
marketplace-operator-7d67745bb7-dwcxb |
Scheduled |
Successfully assigned openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb to master-0 | ||
openshift-marketplace |
redhat-marketplace-mtm6s |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-mtm6s to master-0 | ||
openshift-machine-config-operator |
machine-config-server-pvrfs |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-pvrfs to master-0 | ||
openshift-apiserver-operator |
openshift-apiserver-operator-667484ff5-n7qz8 |
Scheduled |
Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8 to master-0 | ||
openshift-console |
downloads-6f5db8559b-96ljh |
Scheduled |
Successfully assigned openshift-console/downloads-6f5db8559b-96ljh to master-0 | ||
openshift-kube-storage-version-migrator-operator |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz to master-0 | ||
openshift-apiserver-operator |
openshift-apiserver-operator-667484ff5-n7qz8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-kube-storage-version-migrator-operator |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-kube-storage-version-migrator-operator |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-apiserver-operator |
openshift-apiserver-operator-667484ff5-n7qz8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
kube-state-metrics-7dcc7f9bd6-68wml |
Scheduled |
Successfully assigned openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml to master-0 | ||
openshift-kube-storage-version-migrator |
migrator-5bcf58cf9c-dvklg |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg to master-0 | ||
openshift-apiserver |
apiserver-7c895b7864-fxr2k |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-7c895b7864-fxr2k to master-0 | ||
openshift-machine-config-operator |
machine-config-controller-74cddd4fb5-phk6r |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r to master-0 | ||
openshift-ovn-kubernetes |
ovnkube-node-txl6b |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-txl6b to master-0 | ||
openshift-ingress-canary |
ingress-canary-vkpv4 |
Scheduled |
Successfully assigned openshift-ingress-canary/ingress-canary-vkpv4 to master-0 | ||
openshift-machine-config-operator |
machine-config-daemon-2ztl9 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-2ztl9 to master-0 | ||
openshift-ingress |
router-default-54f97f57-rr9px |
Scheduled |
Successfully assigned openshift-ingress/router-default-54f97f57-rr9px to master-0 | ||
openshift-ingress |
router-default-54f97f57-rr9px |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress |
router-default-54f97f57-rr9px |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress |
router-default-54f97f57-rr9px |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress |
router-default-54f97f57-rr9px |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-console |
console-c5d7cd7f9-2hp75 |
Scheduled |
Successfully assigned openshift-console/console-c5d7cd7f9-2hp75 to master-0 | ||
openshift-monitoring |
metrics-server-555496955b-vpcbs |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-555496955b-vpcbs to master-0 | ||
openshift-monitoring |
monitoring-plugin-547cc9cc49-kqs4k |
Scheduled |
Successfully assigned openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k to master-0 | ||
openshift-monitoring |
node-exporter-b62gf |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-b62gf to master-0 | ||
openshift-operator-lifecycle-manager |
olm-operator-76bd5d69c7-fjrrg |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg to master-0 | ||
openshift-kube-apiserver-operator |
kube-apiserver-operator-5b557b5f57-s5s96 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-marketplace |
redhat-marketplace-ddwmn |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-ddwmn to master-0 | ||
openshift-kube-apiserver-operator |
kube-apiserver-operator-5b557b5f57-s5s96 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
openshift-state-metrics-57cbc648f8-q4cgg |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg to master-0 | ||
openshift-kube-apiserver-operator |
kube-apiserver-operator-5b557b5f57-s5s96 |
Scheduled |
Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96 to master-0 | ||
openshift-image-registry |
node-ca-4p4zh |
Scheduled |
Successfully assigned openshift-image-registry/node-ca-4p4zh to master-0 | ||
openshift-controller-manager-operator |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
Scheduled |
Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p to master-0 | ||
openshift-console |
console-648d88c756-vswh8 |
Scheduled |
Successfully assigned openshift-console/console-648d88c756-vswh8 to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-controller-manager-operator |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-controller-manager-operator |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
collect-profiles-29412840-nfbpl |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl to master-0 | ||
openshift-catalogd |
catalogd-controller-manager-754cfd84-qf898 |
Scheduled |
Successfully assigned openshift-catalogd/catalogd-controller-manager-754cfd84-qf898 to master-0 | ||
openshift-service-ca-operator |
service-ca-operator-56f5898f45-fhnc5 |
Scheduled |
Successfully assigned openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5 to master-0 | ||
openshift-service-ca-operator |
service-ca-operator-56f5898f45-fhnc5 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-service-ca-operator |
service-ca-operator-56f5898f45-fhnc5 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
catalog-operator-7cf5cf757f-zgm6l |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l to master-0 | ||
openshift-operator-controller |
operator-controller-controller-manager-5f78c89466-bshxw |
Scheduled |
Successfully assigned openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw to master-0 | ||
openshift-config-operator |
openshift-config-operator-68c95b6cf5-fmdmz |
Scheduled |
Successfully assigned openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz to master-0 | ||
openshift-monitoring |
prometheus-operator-565bdcb8-477pk |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-565bdcb8-477pk to master-0 | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq to master-0 | ||
openshift-image-registry |
cluster-image-registry-operator-65dc4bcb88-96zcz |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-machine-config-operator |
machine-config-operator-664c9d94c9-9vfr4 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4 to master-0 | ||
openshift-controller-manager |
controller-manager-569cbcf7fb-99r5f |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-image-registry |
cluster-image-registry-operator-65dc4bcb88-96zcz |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-service-ca |
service-ca-6b8bb995f7-b68p8 |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-6b8bb995f7-b68p8 to master-0 | ||
openshift-controller-manager |
controller-manager-7d8fb964c9-v2h98 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-7d8fb964c9-v2h98 to master-0 | ||
openshift-controller-manager |
controller-manager-7d8fb964c9-v2h98 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-image-registry |
cluster-image-registry-operator-65dc4bcb88-96zcz |
Scheduled |
Successfully assigned openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz to master-0 | ||
openshift-monitoring |
cluster-monitoring-operator-69cc794c58-mfjk2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
cluster-monitoring-operator-69cc794c58-mfjk2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-oauth-apiserver |
apiserver-57fd58bc7b-kktql |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql to master-0 | ||
openshift-apiserver |
apiserver-6985f84b49-v9vlg |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-6985f84b49-v9vlg to master-0 | ||
openshift-apiserver |
apiserver-6985f84b49-v9vlg |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-76f56467d7-252sh |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh to master-0 | ||
openshift-monitoring |
cluster-monitoring-operator-69cc794c58-mfjk2 |
Scheduled |
Successfully assigned openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2 to master-0 | ||
openshift-cloud-credential-operator |
cloud-credential-operator-7c4dc67499-tjwg8 |
Scheduled |
Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8 to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-5dbcf69784-65p95 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-5dbcf69784-65p95 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-66bd7f46c9-p8fcq |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-66bd7f46c9-p8fcq |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq to master-0 | ||
openshift-cluster-version |
cluster-version-operator-7c49fbfc6f-7krqx |
Scheduled |
Successfully assigned openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-cluster-machine-approver |
machine-approver-5775bfbf6d-vtvbd |
Scheduled |
Successfully assigned openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd to master-0 | ||
openshift-controller-manager |
controller-manager-78d987764b-xcs5w |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-78d987764b-xcs5w to master-0 | ||
openshift-controller-manager |
controller-manager-78d987764b-xcs5w |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-cluster-machine-approver |
machine-approver-cb84b9cdf-qn94w |
Scheduled |
Successfully assigned openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w to master-0 | ||
openshift-network-operator |
iptables-alerter-n24qb |
Scheduled |
Successfully assigned openshift-network-operator/iptables-alerter-n24qb to master-0 | ||
openshift-network-node-identity |
network-node-identity-c8csx |
Scheduled |
Successfully assigned openshift-network-node-identity/network-node-identity-c8csx to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-678c7f799b-4b7nv |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-678c7f799b-4b7nv |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-678c7f799b-4b7nv |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv to master-0 | ||
openshift-cluster-node-tuning-operator |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-cluster-node-tuning-operator |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-route-controller-manager |
route-controller-manager-6fcd4b8856-ztns6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-6fcd4b8856-ztns6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-6fcd4b8856-ztns6 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6 to master-0 | ||
openshift-cluster-node-tuning-operator |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
Scheduled |
Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm to master-0 | ||
openshift-network-diagnostics |
network-check-target-pcchm |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-pcchm to master-0 | ||
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
Scheduled |
Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74 to master-0 | ||
openshift-network-diagnostics |
network-check-source-6964bb78b7-g4lv2 |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2 to master-0 | ||
openshift-network-diagnostics |
network-check-source-6964bb78b7-g4lv2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-source-6964bb78b7-g4lv2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-route-controller-manager |
route-controller-manager-75c7768d99-klvvl |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-network-diagnostics |
network-check-source-6964bb78b7-g4lv2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-cluster-node-tuning-operator |
tuned-7zkbg |
Scheduled |
Successfully assigned openshift-cluster-node-tuning-operator/tuned-7zkbg to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-7787465f55-49pjz |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-network-diagnostics |
network-check-source-6964bb78b7-g4lv2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-route-controller-manager |
route-controller-manager-7f6f54d5f6-ch42s |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-7f6f54d5f6-ch42s |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-7f6f54d5f6-ch42s |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s to master-0 | ||
openshift-cluster-olm-operator |
cluster-olm-operator-589f5cdc9d-5h2kn |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
| | openshift-cluster-olm-operator | | cluster-olm-operator-589f5cdc9d-5h2kn | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-olm-operator | | cluster-olm-operator-589f5cdc9d-5h2kn | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-7ffbbcd969-mkclq | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq to master-0 |
| | openshift-controller-manager | | controller-manager-6fb5f97c4d-bcdbq | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq to master-0 |
| | openshift-controller-manager | | controller-manager-6fb5f97c4d-bcdbq | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-84f75d5446-j8tkx | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-84f75d5446-j8tkx | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-84f75d5446-j8tkx | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx to master-0 |
| | openshift-dns-operator | | dns-operator-6b7bcd6566-jh9m8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-dns-operator | | dns-operator-6b7bcd6566-jh9m8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-dns-operator | | dns-operator-6b7bcd6566-jh9m8 | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8 to master-0 |
| | openshift-controller-manager | | controller-manager-6c79f444f7-8rmss | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6c79f444f7-8rmss to master-0 |
| | openshift-dns | | dns-default-5m4f8 | Scheduled | Successfully assigned openshift-dns/dns-default-5m4f8 to master-0 |
| | openshift-controller-manager | | controller-manager-6c79f444f7-8rmss | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-samples-operator | | cluster-samples-operator-6d64b47964-jjd7h | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h to master-0 |
| | openshift-multus | | multus-admission-controller-78ddcf56f9-8l84w | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-78ddcf56f9-8l84w to master-0 |
| | openshift-multus | | multus-admission-controller-78ddcf56f9-8l84w | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | | multus-admission-controller-78ddcf56f9-8l84w | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-controller-manager | | controller-manager-6c4bfbb4d5-77st9 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9 to master-0 |
| | openshift-controller-manager | | controller-manager-6c4bfbb4d5-77st9 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-6c4bfbb4d5-77st9 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-688676d587-z9qm2 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-multus | | multus-admission-controller-5bdcc987c4-x99xc | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5bdcc987c4-x99xc to master-0 |
| | openshift-controller-manager | | controller-manager-688676d587-z9qm2 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-insights | | insights-operator-59d99f9b7b-74sss | Scheduled | Successfully assigned openshift-insights/insights-operator-59d99f9b7b-74sss to master-0 |
| | openshift-cluster-storage-operator | | cluster-storage-operator-f84784664-ntb9w | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w to master-0 |
| | openshift-controller-manager | | controller-manager-5c8b4c9687-4pxw5 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5 to master-0 |
| | openshift-controller-manager | | controller-manager-5c8b4c9687-4pxw5 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-86897dd478-qqwh7 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7 to master-0 |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-operator-7b795784b8-44frm | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-operator-7b795784b8-44frm | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 to master-0 |
| | openshift-controller-manager | | controller-manager-56fb5cd58b-5hnj2 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2 to master-0 |
| | openshift-controller-manager | | controller-manager-595fbb95f7-nqxs8 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-operator-7b795784b8-44frm | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm to master-0 |
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_1d7f749c-e20e-4358-b2eb-2240be7b9281 became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_5148b60d-9324-4f2a-86ce-6ba1947a6676 became leader |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_28711416-0e53-481c-8d11-87e52e838181 became leader |
| | kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_f0bd62aa-8ffb-41b6-add4-ebce59def88d became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace |
| (x2) | assisted-installer | job-controller | assisted-installer-controller | FailedCreate | Error creating: pods "assisted-installer-controller-" is forbidden: error looking up service account assisted-installer/assisted-installer-controller: serviceaccount "assisted-installer-controller" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace |
| | assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-stq5g |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-869c786959 to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_34053b74-5a19-4d42-b1e9-ecd14aa8b608 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace |
| | openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-589f5cdc9d to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace |
| (x13) | openshift-cluster-version | replicaset-controller | cluster-version-operator-869c786959 | FailedCreate | Error creating: pods "cluster-version-operator-869c786959-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace |
| | openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-6cbf58c977 to 1 |
| | openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-56f5898f45 to 1 |
| | openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-5f574c6c79 to 1 |
| | openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-b5dddf8f5 to 1 |
| | openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-667484ff5 to 1 |
| | openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-7c4697b5f5 to 1 |
| | openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-7d67745bb7 to 1 |
| | openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-7978bf889c to 1 |
| | openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-6b7bcd6566 to 1 |
| | openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-7479ffdf48 to 1 |
| | openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-67c4cff67d to 1 |
| (x2) | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace |
| (x12) | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-589f5cdc9d | FailedCreate | Error creating: pods "cluster-olm-operator-589f5cdc9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-network-operator | replicaset-controller | network-operator-6cbf58c977 | FailedCreate | Error creating: pods "network-operator-6cbf58c977-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-b5dddf8f5 | FailedCreate | Error creating: pods "kube-controller-manager-operator-b5dddf8f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-56f5898f45 | FailedCreate | Error creating: pods "service-ca-operator-56f5898f45-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-5f574c6c79 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-5f574c6c79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-marketplace | replicaset-controller | marketplace-operator-7d67745bb7 | FailedCreate | Error creating: pods "marketplace-operator-7d67745bb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-dns-operator | replicaset-controller | dns-operator-6b7bcd6566 | FailedCreate | Error creating: pods "dns-operator-6b7bcd6566-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-667484ff5 | FailedCreate | Error creating: pods "openshift-apiserver-operator-667484ff5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-etcd-operator | replicaset-controller | etcd-operator-7978bf889c | FailedCreate | Error creating: pods "etcd-operator-7978bf889c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-7c4697b5f5 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-7c4697b5f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-7b795784b8 to 1 |
| (x12) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-67c4cff67d | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-67c4cff67d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-authentication-operator | replicaset-controller | authentication-operator-7479ffdf48 | FailedCreate | Error creating: pods "authentication-operator-7479ffdf48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-69cc794c58 to 1 |
| | openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-bbd9b9dff to 1 |
| (x9) | assisted-installer | default-scheduler | assisted-installer-controller-stq5g | FailedScheduling | no nodes available to schedule pods |
| | openshift-kube-apiserver-operator | deployment-controller | kube-apiserver-operator | ScalingReplicaSet | Scaled up replica set kube-apiserver-operator-5b557b5f57 to 1 |
| (x9) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-bbd9b9dff | FailedCreate | Error creating: pods "cluster-node-tuning-operator-bbd9b9dff-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-7b795784b8 | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-7b795784b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-75b4d49d4c to 1 |
| (x9) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-69cc794c58 | FailedCreate | Error creating: pods "cluster-monitoring-operator-69cc794c58-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-75b4d49d4c | FailedCreate | Error creating: pods "package-server-manager-75b4d49d4c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-image-registry | deployment-controller | cluster-image-registry-operator | ScalingReplicaSet | Scaled up replica set cluster-image-registry-operator-65dc4bcb88 to 1 |
| | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| (x8) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-65dc4bcb88 | FailedCreate | Error creating: pods "cluster-image-registry-operator-65dc4bcb88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| (x7) | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-5b557b5f57 | FailedCreate | Error creating: pods "kube-apiserver-operator-5b557b5f57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-85dbd94574 to 1 |
| (x6) | openshift-ingress-operator | replicaset-controller | ingress-operator-85dbd94574 | FailedCreate | Error creating: pods "ingress-operator-85dbd94574-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_064adf2a-26b2-4386-b962-06865b9f2769 became leader |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_550412e6-9edf-4337-954e-5fb4994ef272 became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_b2c3f931-5568-4d1e-a8e9-cc8be654a235 became leader |
| (x6) | assisted-installer | default-scheduler | assisted-installer-controller-stq5g | FailedScheduling | no nodes available to schedule pods |
| (x10) | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-589f5cdc9d | FailedCreate | Error creating: pods "cluster-olm-operator-589f5cdc9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-667484ff5 | FailedCreate | Error creating: pods "openshift-apiserver-operator-667484ff5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-7b795784b8 | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-7b795784b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-bbd9b9dff | FailedCreate | Error creating: pods "cluster-node-tuning-operator-bbd9b9dff-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-authentication-operator | replicaset-controller | authentication-operator-7479ffdf48 | FailedCreate | Error creating: pods "authentication-operator-7479ffdf48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-dns-operator | replicaset-controller | dns-operator-6b7bcd6566 | FailedCreate | Error creating: pods "dns-operator-6b7bcd6566-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-7c4697b5f5 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-7c4697b5f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-67c4cff67d | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-67c4cff67d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-ingress-operator | replicaset-controller | ingress-operator-85dbd94574 | FailedCreate | Error creating: pods "ingress-operator-85dbd94574-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-etcd-operator | replicaset-controller | etcd-operator-7978bf889c | FailedCreate | Error creating: pods "etcd-operator-7978bf889c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-b5dddf8f5 | FailedCreate | Error creating: pods "kube-controller-manager-operator-b5dddf8f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-marketplace | replicaset-controller | marketplace-operator-7d67745bb7 | FailedCreate | Error creating: pods "marketplace-operator-7d67745bb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-65dc4bcb88 | FailedCreate | Error creating: pods "cluster-image-registry-operator-65dc4bcb88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-5b557b5f57 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-5b557b5f57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-5f574c6c79 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-5f574c6c79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-869c786959 |
FailedCreate |
Error creating: pods "cluster-version-operator-869c786959-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-69cc794c58 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-69cc794c58-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-56f5898f45 |
FailedCreate |
Error creating: pods "service-ca-operator-56f5898f45-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-network-operator |
replicaset-controller |
network-operator-6cbf58c977 |
FailedCreate |
Error creating: pods "network-operator-6cbf58c977-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-75b4d49d4c |
FailedCreate |
Error creating: pods "package-server-manager-75b4d49d4c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-bbd9b9dff |
SuccessfulCreate |
Created pod: cluster-node-tuning-operator-bbd9b9dff-rrfsm | |
openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-589f5cdc9d |
SuccessfulCreate |
Created pod: cluster-olm-operator-589f5cdc9d-5h2kn | |
openshift-authentication-operator |
replicaset-controller |
authentication-operator-7479ffdf48 |
SuccessfulCreate |
Created pod: authentication-operator-7479ffdf48-hpdzl | |
openshift-authentication-operator |
default-scheduler |
authentication-operator-7479ffdf48-hpdzl |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-667484ff5 |
SuccessfulCreate |
Created pod: openshift-apiserver-operator-667484ff5-n7qz8 | |
openshift-etcd-operator |
replicaset-controller |
etcd-operator-7978bf889c |
SuccessfulCreate |
Created pod: etcd-operator-7978bf889c-n64v4 | |
openshift-dns-operator |
replicaset-controller |
dns-operator-6b7bcd6566 |
SuccessfulCreate |
Created pod: dns-operator-6b7bcd6566-jh9m8 | |
openshift-cluster-node-tuning-operator |
default-scheduler |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-olm-operator |
default-scheduler |
cluster-olm-operator-589f5cdc9d-5h2kn |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7b795784b8 |
SuccessfulCreate |
Created pod: csi-snapshot-controller-operator-7b795784b8-44frm | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-869c786959 |
SuccessfulCreate |
Created pod: cluster-version-operator-869c786959-vrvwt | |
openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-67c4cff67d |
SuccessfulCreate |
Created pod: kube-storage-version-migrator-operator-67c4cff67d-q2lxz | |
openshift-apiserver-operator |
default-scheduler |
openshift-apiserver-operator-667484ff5-n7qz8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-7c4697b5f5 |
SuccessfulCreate |
Created pod: openshift-controller-manager-operator-7c4697b5f5-9f69p | |
openshift-dns-operator |
default-scheduler |
dns-operator-6b7bcd6566-jh9m8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-operator-7b795784b8-44frm |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-65dc4bcb88 |
SuccessfulCreate |
Created pod: cluster-image-registry-operator-65dc4bcb88-96zcz | |
openshift-controller-manager-operator |
default-scheduler |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-apiserver-operator |
default-scheduler |
kube-apiserver-operator-5b557b5f57-s5s96 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-version |
default-scheduler |
cluster-version-operator-869c786959-vrvwt |
Scheduled |
Successfully assigned openshift-cluster-version/cluster-version-operator-869c786959-vrvwt to master-0 | |
openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-b5dddf8f5 |
SuccessfulCreate |
Created pod: kube-controller-manager-operator-b5dddf8f5-kwb74 | |
openshift-network-operator |
default-scheduler |
network-operator-6cbf58c977-8lh6n |
Scheduled |
Successfully assigned openshift-network-operator/network-operator-6cbf58c977-8lh6n to master-0 | |
openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-5f574c6c79 |
SuccessfulCreate |
Created pod: openshift-kube-scheduler-operator-5f574c6c79-86bh9 | |
| (x5) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5) |
openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-56f5898f45 |
SuccessfulCreate |
Created pod: service-ca-operator-56f5898f45-fhnc5 | |
openshift-operator-lifecycle-manager |
default-scheduler |
package-server-manager-75b4d49d4c-h599p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-monitoring |
default-scheduler |
cluster-monitoring-operator-69cc794c58-mfjk2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-network-operator |
replicaset-controller |
network-operator-6cbf58c977 |
SuccessfulCreate |
Created pod: network-operator-6cbf58c977-8lh6n | |
openshift-marketplace |
replicaset-controller |
marketplace-operator-7d67745bb7 |
SuccessfulCreate |
Created pod: marketplace-operator-7d67745bb7-dwcxb | |
openshift-ingress-operator |
default-scheduler |
ingress-operator-85dbd94574-8jfp5 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-scheduler-operator |
default-scheduler |
openshift-kube-scheduler-operator-5f574c6c79-86bh9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-ingress-operator |
replicaset-controller |
ingress-operator-85dbd94574 |
SuccessfulCreate |
Created pod: ingress-operator-85dbd94574-8jfp5 | |
openshift-image-registry |
default-scheduler |
cluster-image-registry-operator-65dc4bcb88-96zcz |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-etcd-operator |
default-scheduler |
etcd-operator-7978bf889c-n64v4 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-marketplace |
default-scheduler |
marketplace-operator-7d67745bb7-dwcxb |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-service-ca-operator |
default-scheduler |
service-ca-operator-56f5898f45-fhnc5 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-controller-manager-operator |
default-scheduler |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-5b557b5f57 |
SuccessfulCreate |
Created pod: kube-apiserver-operator-5b557b5f57-s5s96 | |
openshift-kube-storage-version-migrator-operator |
default-scheduler |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-75b4d49d4c |
SuccessfulCreate |
Created pod: package-server-manager-75b4d49d4c-h599p | |
openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-69cc794c58 |
SuccessfulCreate |
Created pod: cluster-monitoring-operator-69cc794c58-mfjk2 | |
openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" | |
assisted-installer |
default-scheduler |
assisted-installer-controller-stq5g |
Scheduled |
Successfully assigned assisted-installer/assisted-installer-controller-stq5g to master-0 | |
assisted-installer |
kubelet |
assisted-installer-controller-stq5g |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4" | |
openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" in 7.151s (7.151s including waiting). Image size: 616123373 bytes. | |
| (x2) | openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Failed |
Error: services have not yet been read at least once, cannot construct envvars |
| (x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| (x2) | assisted-installer |
kubelet |
assisted-installer-controller-stq5g |
Failed |
Error: services have not yet been read at least once, cannot construct envvars |
| (x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Created |
Created container: kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Started |
Started container kube-rbac-proxy-crio |
assisted-installer |
kubelet |
assisted-installer-controller-stq5g |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4" in 4.866s (4.866s including waiting). Image size: 682385666 bytes. | |
openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Started |
Started container network-operator | |
| (x2) | openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine |
openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Created |
Created container: network-operator | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-0_64a89760-15d2-42ce-ba18-6d27709b98d7 became leader | |
assisted-installer |
kubelet |
assisted-installer-controller-stq5g |
Created |
Created container: assisted-installer-controller | |
| (x2) | assisted-installer |
kubelet |
assisted-installer-controller-stq5g |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4" already present on machine |
assisted-installer |
kubelet |
assisted-installer-controller-stq5g |
Started |
Started container assisted-installer-controller | |
assisted-installer |
job-controller |
assisted-installer-controller |
Completed |
Job completed | |
openshift-network-operator |
job-controller |
mtu-prober |
SuccessfulCreate |
Created pod: mtu-prober-jqvnb | |
openshift-network-operator |
default-scheduler |
mtu-prober-jqvnb |
Scheduled |
Successfully assigned openshift-network-operator/mtu-prober-jqvnb to master-0 | |
openshift-network-operator |
kubelet |
mtu-prober-jqvnb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine | |
openshift-network-operator |
kubelet |
mtu-prober-jqvnb |
Created |
Created container: prober | |
openshift-network-operator |
kubelet |
mtu-prober-jqvnb |
Started |
Started container prober | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-multus namespace | |
openshift-multus |
kubelet |
multus-kk4tm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-ch7xd | |
openshift-multus |
default-scheduler |
multus-additional-cni-plugins-42hmk |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-42hmk to master-0 | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-42hmk | |
openshift-multus |
default-scheduler |
network-metrics-daemon-ch7xd |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-ch7xd to master-0 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceaa4102b35e54be54e23c8ea73bb0dac4978cffb54105ad00b51393f47595da" | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-kk4tm | |
openshift-multus |
default-scheduler |
multus-kk4tm |
Scheduled |
Successfully assigned openshift-multus/multus-kk4tm to master-0 | |
openshift-multus |
default-scheduler |
multus-admission-controller-78ddcf56f9-8l84w |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-multus |
replicaset-controller |
multus-admission-controller-78ddcf56f9 |
SuccessfulCreate |
Created pod: multus-admission-controller-78ddcf56f9-8l84w | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-78ddcf56f9 to 1 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceaa4102b35e54be54e23c8ea73bb0dac4978cffb54105ad00b51393f47595da" in 2.946s (2.946s including waiting). Image size: 532338751 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Created |
Created container: egress-router-binary-copy | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-ovn-kubernetes namespace | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-host-network namespace | |
openshift-ovn-kubernetes |
deployment-controller |
ovnkube-control-plane |
ScalingReplicaSet |
Scaled up replica set ovnkube-control-plane-f9f7f4946 to 1 | |
openshift-ovn-kubernetes |
replicaset-controller |
ovnkube-control-plane-f9f7f4946 |
SuccessfulCreate |
Created pod: ovnkube-control-plane-f9f7f4946-48mrg | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-m5stk | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-control-plane-f9f7f4946-48mrg |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg to master-0 | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-m5stk |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-m5stk to master-0 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-network-diagnostics namespace | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda" in 11.71s (11.71s including waiting). Image size: 677540255 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" | |
openshift-multus |
kubelet |
multus-kk4tm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" in 15.751s (15.751s including waiting). Image size: 1232076476 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-f9f7f4946-48mrg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-f9f7f4946-48mrg |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Created |
Created container: cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Started |
Started container cni-plugins | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-f9f7f4946-48mrg |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-f9f7f4946-48mrg |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926" | |
| (x7) | openshift-multus |
kubelet |
network-metrics-daemon-ch7xd |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x18) | openshift-multus |
kubelet |
network-metrics-daemon-ch7xd |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-network-node-identity namespace | |
| (x4) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
BackOff |
Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a) |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926" in 17.687s (17.687s including waiting). Image size: 406067436 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Started |
Started container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Created |
Created container: bond-cni-plugin | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Created |
Created container: nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" in 28.844s (28.844s including waiting). Image size: 1631769045 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Created |
Created container: kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Started |
Started container kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Created |
Created container: ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Started |
Started container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Created |
Created container: ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Started |
Started container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Created |
Created container: kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m5stk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-48mrg | Started | Started container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-48mrg | Created | Created container: ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-f9f7f4946-48mrg became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-48mrg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" in 18.399s (18.399s including waiting). Image size: 1631769045 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Started | Started container nbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f" in 1.106s (1.106s including waiting). Image size: 401824348 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m5stk | Started | Started container sbdb |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_382ab4cf-c3b2-46cb-bc25-e729de4ce8cd became leader |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" in 11.91s (11.91s including waiting). Image size: 870581225 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: kube-multus-additional-cni-plugins |
| (x9) | openshift-cluster-version | kubelet | cluster-version-operator-869c786959-vrvwt | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-multus | kubelet | multus-kk4tm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine |
| (x2) | openshift-multus | kubelet | multus-kk4tm | Created | Created container: kube-multus |
| (x2) | openshift-multus | kubelet | multus-kk4tm | Started | Started container kube-multus |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_029c0742-24a0-4060-9014-3f8c98ef3c56 became leader |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | BackOff | Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(d78739a7694769882b7e47ea5ac08a10) |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-pcchm |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-c8csx |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-6964bb78b7 to 1 |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-6964bb78b7 | SuccessfulCreate | Created pod: network-check-source-6964bb78b7-g4lv2 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-m5stk |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-txl6b |
| (x2) | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| (x3) | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| (x3) | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_25826bec-a6ad-4032-8d52-a53cd07b7801 became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: kubecfg-setup |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Created | Created container: approver |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Started | Started container webhook |
| | openshift-network-node-identity | master-0_d6f32dcc-02fb-4208-b6c9-ed805f3d81be | ovnkube-identity | LeaderElection | master-0_d6f32dcc-02fb-4208-b6c9-ed805f3d81be became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Created | Created container: webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | default | ovnk-controlplane | master-0 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0] |
| | default | ovnkube-csr-approver-controller | csr-4jc9b | CSRApproved | CSR "csr-4jc9b" has been approved |
| (x9) | openshift-network-diagnostics | kubelet | network-check-target-pcchm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-network-diagnostics | kubelet | network-check-target-pcchm | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-v429m" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-b5dddf8f5-kwb74 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-service-ca-operator | multus | service-ca-operator-56f5898f45-fhnc5 | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-7c4697b5f5-9f69p | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | multus | kube-apiserver-operator-5b557b5f57-s5s96 | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-n24qb |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" |
| | openshift-authentication-operator | multus | authentication-operator-7479ffdf48-hpdzl | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | multus | cluster-olm-operator-589f5cdc9d-5h2kn | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-etcd-operator | multus | etcd-operator-7978bf889c-n64v4 | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-7b795784b8-44frm | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-667484ff5-n7qz8 | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9" |
| | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110" |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Started | Started container kube-apiserver-operator |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Created | Created container: kube-apiserver-operator |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" |
| (x4) | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| (x4) | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5b557b5f57-s5s96_f9673649-d9a9-4234-987a-b061fe918e36 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.28" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing |
| (x4) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0 |
| (x4) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x4) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x4) | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x4) | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.28"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected changed from Unknown to False ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe": pull QPS exceeded |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" |
| | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" |
| (x3) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
Failed |
Error: ErrImagePull | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Failed |
Error: ErrImagePull | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
Failed |
Error: ErrImagePull | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceCreated |
Created Service/apiserver -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-authentication-operator |
kubelet |
authentication-operator-7479ffdf48-hpdzl |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-authentication-operator |
kubelet |
authentication-operator-7479ffdf48-hpdzl |
Failed |
Error: ErrImagePull | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
Failed |
Error: ErrImagePull | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
Failed |
Error: ErrImagePull | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b795784b8-44frm |
Failed |
Error: ErrImagePull | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b795784b8-44frm |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-network-diagnostics |
kubelet |
network-check-target-pcchm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine | |
openshift-network-diagnostics |
multus |
network-check-target-pcchm |
AddedInterface |
Add eth0 [10.128.0.4/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" in 9.561s (9.561s including waiting). Image size: 512852463 bytes. | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
Failed |
Error: ErrImagePull | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreateFailed |
Failed to create Secret/: secrets "control-plane-node-admin-client-cert-key" already exists | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-5f574c6c79-86bh9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" in 9.619s (9.619s including waiting). Image size: 500863090 bytes. | |
openshift-network-diagnostics |
kubelet |
network-check-target-pcchm |
Created |
Created container: network-check-target-container | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
etcds.operator.openshift.io "cluster" not found | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-7978bf889c-n64v4_77d83e21-5764-429c-929f-db486631a912 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.28"}] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorVersionChanged |
clusteroperator/etcd version "raw-internal" changed from "" to "4.18.28" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodeObserved |
Observed new master node master-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-network-diagnostics |
kubelet |
network-check-target-pcchm |
Started |
Started container network-check-target-container | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodesReadyChanged |
All master nodes are ready | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-5f574c6c79-86bh9_4ed7c136-9e50-4d87-acb6-9ad2128adab0 became leader | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.28"}] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node master-0 | |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.28" |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-kube-scheduler-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodesReadyChanged |
All master nodes are ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing | |
| (x6) | openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x2) | openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
CustomResourceDefinitionUpdated |
Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing | |
| (x6) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-65dc4bcb88-96zcz |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" in 586ms (587ms including waiting). Image size: 503025552 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-scheduler because it changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing | |
| (x6) | openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x6) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
NamespaceUpdated |
Updated Namespace/openshift-etcd because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing | |
| (x6) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-env-var-controller |
etcd-operator |
EnvVarControllerUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
| (x6) | openshift-cluster-version |
kubelet |
cluster-version-operator-869c786959-vrvwt |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| (x5) | openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
RequiredInstallerResourcesMissing |
configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing | |
| (x2) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" |
openshift-network-operator |
kubelet |
iptables-alerter-n24qb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" already present on machine | |
| (x2) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator-lock |
LeaderElection |
service-ca-operator-56f5898f45-fhnc5_2f08806b-bb04-4998-a584-446ded2cb0b6 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca namespace | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceCreated |
Created Service/scheduler -n openshift-kube-scheduler because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
NamespaceCreated |
Created Namespace/openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" in 3.129s (3.129s including waiting). Image size: 506755373 bytes. | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"etcd-pod-0\" not found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceUpdated |
Updated Service/etcd -n openshift-etcd because it changed | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Started |
Started container copy-catalogd-manifests | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Created |
Created container: copy-catalogd-manifests | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" in 3.129s (3.129s including waiting). Image size: 442523452 bytes. | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ServiceAccountCreated |
Created ServiceAccount/service-ca -n openshift-service-ca because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing | |
openshift-network-operator |
kubelet |
iptables-alerter-n24qb |
Created |
Created container: iptables-alerter | |
openshift-network-operator |
kubelet |
iptables-alerter-n24qb |
Started |
Started container iptables-alerter | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-6b8bb995f7 to 1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-kube-apiserver because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-service-ca |
kubelet |
service-ca-6b8bb995f7-b68p8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
RoutingConfigSubdomainChanged |
Domain changed from "" to "apps.sno.openstack.lab" | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator-lock |
LeaderElection |
openshift-apiserver-operator-667484ff5-n7qz8_441f57a4-a308-4a8e-adad-84fb8cd9816f became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, } |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.28"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.28" |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | DeploymentUpdated | Updated Deployment.apps/service-ca -n openshift-service-ca because it changed |
| | openshift-service-ca | multus | service-ca-6b8bb995f7-b68p8 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-service-ca | replicaset-controller | service-ca-6b8bb995f7 | SuccessfulCreate | Created pod: service-ca-6b8bb995f7-b68p8 |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | Started | Started container service-ca-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110" |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | Created | Created container: service-ca-controller |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "goaway-chance": []any{string("0")}, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("true")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, + "shutdown-delay-duration": []any{string("0s")}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "gracefulTerminationDuration": string("15"), + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorVersionChanged | clusteroperator/service-ca version "operator" changed from "" to "4.18.28" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-6b8bb995f7-b68p8_1a9505fb-3caa-4156-b9ce-bd4707b5f5a3 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379 |
| (x77) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | no observedConfig |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.28"}] |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-client-token\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Started | Started container csi-snapshot-controller-operator |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Created | Created container: copy-operator-controller-manifests |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9" in 1.22s (1.22s including waiting). Image size: 500957387 bytes. |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Created | Created container: csi-snapshot-controller-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110" in 2.218s (2.218s including waiting). Image size: 507701628 bytes. |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" in 4.049s (4.049s including waiting). Image size: 489542560 bytes. |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" in 887ms (887ms including waiting). Image size: 502450335 bytes. |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" in 889ms (889ms including waiting). Image size: 503354646 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908" |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Started | Started container copy-operator-controller-manifests |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | NamespaceCreated | Created Namespace/openshift-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-b5dddf8f5-kwb74_16e67fb2-5322-4e45-8dfd-f575aa47d16c became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-86897dd478 | SuccessfulCreate | Created pod: csi-snapshot-controller-86897dd478-qqwh7 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" | |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-86897dd478 to 1 | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller |
csi-snapshot-controller-operator |
DeploymentCreated |
Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources |
csi-snapshot-controller-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources |
csi-snapshot-controller-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator-lock |
LeaderElection |
csi-snapshot-controller-operator-7b795784b8-44frm_645a2aff-a428-4e6f-911f-65fafcb52d2a became leader | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" in 902ms (902ms including waiting). Image size: 499096673 bytes. | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-7479ffdf48-hpdzl_f4c44c78-9701-431c-a669-2599e86f7f25 became leader | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources |
csi-snapshot-controller-operator |
ServiceAccountCreated |
Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-client-token\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator-lock |
LeaderElection |
openshift-controller-manager-operator-7c4697b5f5-9f69p_ccc40433-ea89-4ffe-aa9d-0cd46aae4d23 became leader | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz_66150a42-cb85-44e5-99ea-b69d8351ec5f became leader | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to BuildCSIVolumes=true | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d00e4a8d28"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f779b92bb"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-86897dd478-qqwh7 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
ServiceCreated |
Created Service/api -n openshift-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-2tgj7")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit-1 -n openshift-apiserver because it was missing | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-86897dd478-qqwh7 |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.28" |
| (x49) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
RequiredInstallerResourcesMissing |
configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.28"}] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreateFailed |
Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create configmap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node master-0 |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
CABundleUpdateRequired |
"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.28" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreateFailed | Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.28"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| (x7) | openshift-controller-manager | replicaset-controller | controller-manager-56fb5cd58b | FailedCreate | Error creating: pods "controller-manager-56fb5cd58b-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| | openshift-controller-manager | replicaset-controller | controller-manager-56fb5cd58b | SuccessfulCreate | Created pod: controller-manager-56fb5cd58b-5hnj2 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 2 triggered by "optional secret/serving-cert has been created" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-56fb5cd58b to 1 |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | openshift-kube-storage-version-migrator | replicaset-controller | migrator-5bcf58cf9c | SuccessfulCreate | Created pod: migrator-5bcf58cf9c-dvklg |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908" in 3.926s (3.926s including waiting). Image size: 505642108 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator | kube-storage-version-migrator-operator | DeploymentCreated | Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator | deployment-controller | migrator | ScalingReplicaSet | Scaled up replica set migrator-5bcf58cf9c to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | TargetUpdateRequired | "csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMissing | no observedConfig |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Started | Started container cluster-olm-operator |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Created | Created container: cluster-olm-operator |
| (x2) | openshift-controller-manager | kubelet | controller-manager-56fb5cd58b-5hnj2 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x2) | openshift-controller-manager | kubelet | controller-manager-56fb5cd58b-5hnj2 | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager | replicaset-controller | controller-manager-56fb5cd58b | SuccessfulDelete | Deleted pod: controller-manager-56fb5cd58b-5hnj2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-controller-manager | kubelet | controller-manager-56fb5cd58b-5hnj2 | FailedMount | MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-controller-manager because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager | replicaset-controller | controller-manager-6c79f444f7 | SuccessfulCreate | Created pod: controller-manager-6c79f444f7-8rmss |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6c79f444f7 to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-56fb5cd58b to 0 from 1 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7ffbbcd969 | SuccessfulCreate | Created pod: route-controller-manager-7ffbbcd969-mkclq |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.28"}] |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.28" |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-7ffbbcd969 to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-589f5cdc9d-5h2kn_30992995-be1a-47fb-b6b3-0282d6e9613b became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace |
| | openshift-kube-storage-version-migrator | multus | migrator-5bcf58cf9c-dvklg | AddedInterface | Add eth0 [10.128.0.27/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| (x3) | openshift-controller-manager | kubelet | controller-manager-56fb5cd58b-5hnj2 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x7) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-ch7xd | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| (x7) | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x7) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x7) | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-kube-controller-manager because it changed |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-bbd9b9dff-rrfsm | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.29/23] from ovn-kubernetes |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-image-registry | multus | cluster-image-registry-operator-65dc4bcb88-96zcz | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" already present on machine |
| | openshift-kube-scheduler | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| | openshift-dns-operator | multus | dns-operator-6b7bcd6566-jh9m8 | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing |
openshift-cluster-version |
kubelet |
cluster-version-operator-869c786959-vrvwt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" | |
openshift-ingress-operator |
multus |
ingress-operator-85dbd94574-8jfp5 |
AddedInterface |
Add eth0 [10.128.0.19/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bcf58cf9c-dvklg |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
TargetConfigDeleted |
Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing | |
| (x2) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
Created |
Created container: openshift-controller-manager-operator |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-authentication-operator |
kubelet |
authentication-operator-7479ffdf48-hpdzl |
Created |
Created container: authentication-operator |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-56fb5cd58b-5hnj2 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager"/"serving-cert" not registered |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing | |
| (x2) | openshift-authentication-operator |
kubelet |
authentication-operator-7479ffdf48-hpdzl |
Started |
Started container authentication-operator |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-apiserver because it was missing | |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-7ffbbcd969-mkclq |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
Started |
Started container openshift-controller-manager-operator |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing | |
openshift-cluster-olm-operator |
OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator-lock |
LeaderElection |
openshift-controller-manager-operator-7c4697b5f5-9f69p_d4c1ca4d-b97f-4d89-b81a-3f885bee7a2e became leader | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing | |
openshift-operator-controller |
deployment-controller |
operator-controller-controller-manager |
ScalingReplicaSet |
Scaled up replica set operator-controller-controller-manager-5f78c89466 to 1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-7c895b7864 to 1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-7c895b7864 |
SuccessfulCreate |
Created pod: apiserver-7c895b7864-fxr2k | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment") | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing | |
openshift-catalogd |
deployment-controller |
catalogd-controller-manager |
ScalingReplicaSet |
Scaled up replica set catalogd-controller-manager-754cfd84 to 1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" | |
openshift-apiserver |
kubelet |
apiserver-7c895b7864-fxr2k |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 2 triggered by "optional secret/serving-cert has been created" | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveServiceCAConfigMap |
observed change in config | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-2tgj7")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca" | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-7479ffdf48-hpdzl_967e7f89-9b7a-4b84-bc62-c6e1399986a0 became leader | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6c79f444f7 |
SuccessfulDelete |
Deleted pod: controller-manager-6c79f444f7-8rmss | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-595fbb95f7 to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-6c79f444f7 to 0 from 1 | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-7ffbbcd969 to 0 from 1 | |
openshift-controller-manager |
kubelet |
controller-manager-6c79f444f7-8rmss |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap,data.openshift-controller-manager.openshift-global-ca.configmap | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-65dc4bcb88-96zcz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf" | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-7787465f55 to 1 from 0 | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") | |
| | openshift-cluster-version | kubelet | cluster-version-operator-869c786959-vrvwt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" in 11.437s (11.437s including waiting). Image size: 512468025 bytes. |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-66bd7f46c9 | SuccessfulCreate | Created pod: route-controller-manager-66bd7f46c9-p8fcq |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | Created | Created container: snapshot-controller |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-etcd | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-595fbb95f7 to 0 from 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-controller-manager | replicaset-controller | controller-manager-595fbb95f7 | SuccessfulDelete | Deleted pod: controller-manager-595fbb95f7-nqxs8 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-66bd7f46c9 to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-688676d587 to 1 from 0 |
| | openshift-controller-manager | multus | controller-manager-6c79f444f7-8rmss | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-controller-manager | replicaset-controller | controller-manager-595fbb95f7 | SuccessfulCreate | Created pod: controller-manager-595fbb95f7-nqxs8 |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831" in 17.063s (17.063s including waiting). Image size: 458183681 bytes. |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" in 11.819s (11.819s including waiting). Image size: 437751308 bytes. |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7ffbbcd969 | SuccessfulDelete | Deleted pod: route-controller-manager-7ffbbcd969-mkclq |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7787465f55 | SuccessfulCreate | Created pod: route-controller-manager-7787465f55-49pjz |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7787465f55 to 0 from 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-688676d587 | SuccessfulCreate | Created pod: controller-manager-688676d587-z9qm2 |
| (x9) | openshift-catalogd | replicaset-controller | catalogd-controller-manager-754cfd84 | FailedCreate | Error creating: pods "catalogd-controller-manager-754cfd84-" is forbidden: unable to validate against any security context constraint: provider "privileged": Forbidden: not usable by user or serviceaccount |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7787465f55 | SuccessfulDelete | Deleted pod: route-controller-manager-7787465f55-49pjz |
| (x10) | openshift-operator-controller | replicaset-controller | operator-controller-controller-manager-5f78c89466 | FailedCreate | Error creating: pods "operator-controller-controller-manager-5f78c89466-" is forbidden: unable to validate against any security context constraint: provider "privileged": Forbidden: not usable by user or serviceaccount |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Created | Created container: migrator |
| | openshift-controller-manager | kubelet | controller-manager-6c79f444f7-8rmss | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | Started | Started container snapshot-controller |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing |
| | openshift-cluster-version | kubelet | cluster-version-operator-869c786959-vrvwt | Created | Created container: cluster-version-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from to https://kubernetes.default.svc |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7ffbbcd969-mkclq | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-route-controller-manager"/"serving-cert" not registered |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x4) | openshift-apiserver | kubelet | apiserver-7c895b7864-fxr2k | FailedMount | MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Started | Started container graceful-termination |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n" |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" |
| | openshift-catalogd | replicaset-controller | catalogd-controller-manager-754cfd84 | SuccessfulCreate | Created pod: catalogd-controller-manager-754cfd84-qf898 |
| (x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Created | Created container: graceful-termination |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-version | kubelet | cluster-version-operator-869c786959-vrvwt | Started | Started container cluster-version-operator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Started | Started container migrator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_2b135ec3-4c8b-4fc2-a9c5-8bd06ce6a733 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-etcd | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" already present on machine |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-688676d587 to 0 from 1 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-569cbcf7fb to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-75c7768d99 to 1 from 0 |
| | openshift-apiserver | replicaset-controller | apiserver-7c895b7864 | SuccessfulDelete | Deleted pod: apiserver-7c895b7864-fxr2k |
| | openshift-etcd | kubelet | installer-1-master-0 | Started | Started container installer |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7c895b7864 to 0 from 1 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6985f84b49 to 1 from 0 |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/catalogd-service -n openshift-catalogd because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-66bd7f46c9 | SuccessfulDelete | Deleted pod: route-controller-manager-66bd7f46c9-p8fcq |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-66bd7f46c9 to 0 from 1 |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-75c7768d99 | SuccessfulCreate | Created pod: route-controller-manager-75c7768d99-klvvl |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-86897dd478-qqwh7 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-86897dd478-qqwh7 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-apiserver | replicaset-controller | apiserver-6985f84b49 | SuccessfulCreate | Created pod: apiserver-6985f84b49-v9vlg |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing |
| (x2) | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : secret "catalogserver-cert" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.28" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.28" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.28"} {"csi-snapshot-controller" "4.18.28"}] |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-controller-manager | replicaset-controller | controller-manager-569cbcf7fb | SuccessfulCreate | Created pod: controller-manager-569cbcf7fb-99r5f |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." |
| | openshift-controller-manager | replicaset-controller | controller-manager-688676d587 | SuccessfulDelete | Deleted pod: controller-manager-688676d587-z9qm2 |
| | openshift-operator-controller | replicaset-controller | operator-controller-controller-manager-5f78c89466 | SuccessfulCreate | Created pod: operator-controller-controller-manager-5f78c89466-bshxw |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : configmap references non-existent config key: ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-nnrcc" has been approved |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-5dbcf69784 | SuccessfulCreate | Created pod: route-controller-manager-5dbcf69784-65p95 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager | replicaset-controller | controller-manager-6c4bfbb4d5 | SuccessfulCreate | Created pod: controller-manager-6c4bfbb4d5-77st9 |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-75c7768d99 | SuccessfulDelete | Deleted pod: route-controller-manager-75c7768d99-klvvl |
| | openshift-controller-manager | replicaset-controller | controller-manager-569cbcf7fb | SuccessfulDelete | Deleted pod: controller-manager-569cbcf7fb-99r5f |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-75c7768d99 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-5dbcf69784 to 1 from 0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
CSRCreated |
A csr "system:openshift:openshift-authenticator-nnrcc" is created for OpenShiftAuthenticatorCertRequester | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
NamespaceCreated |
Created Namespace/openshift-oauth-apiserver because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
MutatingWebhookConfigurationUpdated |
Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-oauth-apiserver namespace | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftAuthenticatorCertRequester is available | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 4 triggered by "required configmap/serviceaccount-ca has changed" | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing | |
| (x65) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ServiceCreated |
Created Service/api -n openshift-oauth-apiserver because it was missing | |
openshift-kube-scheduler |
multus |
installer-2-master-0 |
AddedInterface |
Add eth0 [10.128.0.36/23] from ovn-kubernetes | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-6c79f444f7-8rmss became leader | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-65dc4bcb88-96zcz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf" in 12.383s (12.383s including waiting). Image size: 543241813 bytes. | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca" in 12.382s (12.382s including waiting). Image size: 462741734 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-6c79f444f7-8rmss |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" in 10.978s (10.978s including waiting). Image size: 552687886 bytes. | |
openshift-kube-scheduler |
kubelet |
installer-2-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
Created |
Created container: cluster-node-tuning-operator | |
openshift-controller-manager |
kubelet |
controller-manager-6c79f444f7-8rmss |
Created |
Created container: controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-6c79f444f7-8rmss |
Started |
Started container controller-manager | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
Started |
Started container cluster-node-tuning-operator | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Created |
Created container: dns-operator | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Started |
Started container dns-operator | |
openshift-controller-manager |
kubelet |
controller-manager-6c79f444f7-8rmss |
Killing |
Stopping container controller-manager | |
openshift-image-registry |
image-registry-operator |
openshift-master-controllers |
LeaderElection |
cluster-image-registry-operator-65dc4bcb88-96zcz_4d11b2da-d3ad-40e6-8c4c-70bdef74ef05 became leader | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-65dc4bcb88-96zcz |
Started |
Started container cluster-image-registry-operator | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-65dc4bcb88-96zcz |
Created |
Created container: cluster-image-registry-operator | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-5f78c89466-bshxw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" already present on machine | |
openshift-catalogd |
kubelet |
catalogd-controller-manager-754cfd84-qf898 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" in 12.447s (12.447s including waiting). Image size: 672854011 bytes. | |
openshift-operator-controller |
multus |
operator-controller-controller-manager-5f78c89466-bshxw |
AddedInterface |
Add eth0 [10.128.0.35/23] from ovn-kubernetes | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" in 13.054s (13.054s including waiting). Image size: 505663073 bytes. | |
openshift-cluster-node-tuning-operator |
cluster-node-tuning-operator-bbd9b9dff-rrfsm_5847d5fb-02a3-4739-af8d-b53f199148db |
node-tuning-operator-lock |
LeaderElection |
cluster-node-tuning-operator-bbd9b9dff-rrfsm_5847d5fb-02a3-4739-af8d-b53f199148db became leader | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-cluster-node-tuning-operator |
performance-profile-controller |
cluster-node-tuning-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-authentication namespace | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Created |
Created container: kube-rbac-proxy | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Started |
Started container kube-rbac-proxy | |
openshift-catalogd |
multus |
catalogd-controller-manager-754cfd84-qf898 |
AddedInterface |
Add eth0 [10.128.0.33/23] from ovn-kubernetes | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
NamespaceCreated |
Created Namespace/openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-route-controller-manager | kubelet | route-controller-manager-66bd7f46c9-p8fcq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" |
| | openshift-apiserver | multus | apiserver-6985f84b49-v9vlg | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-route-controller-manager | multus | route-controller-manager-66bd7f46c9-p8fcq | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-7zkbg | Started | Started container tuned |
| | openshift-catalogd | catalogd-controller-manager-754cfd84-qf898_61959568-7a97-4726-be78-670811ea9c9a | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-754cfd84-qf898_61959568-7a97-4726-be78-670811ea9c9a became leader |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-dns | kubelet | dns-default-5m4f8 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Started | Started container manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Created | Created container: manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Created | Created container: manager |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Started | Started container kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Created | Created container: kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Started | Started container kube-rbac-proxy |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-5m4f8 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Started | Started container manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-7zkbg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-7zkbg |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-7zkbg | Created | Created container: tuned |
| | openshift-operator-controller | operator-controller-controller-manager-5f78c89466-bshxw_6a991e5c-6d33-4a5f-880f-ee858df5bff4 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-5f78c89466-bshxw_6a991e5c-6d33-4a5f-880f-ee858df5bff4 became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Started | Started container kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-6c4bfbb4d5-77st9 | Started | Started container controller-manager |
| | openshift-dns | kubelet | node-resolver-4xlhs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" already present on machine |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-54f97f57 to 1 |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-4xlhs |
| | openshift-ingress | replicaset-controller | router-default-54f97f57 | SuccessfulCreate | Created pod: router-default-54f97f57-rr9px |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | kubelet | dns-default-5m4f8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6" |
| | openshift-controller-manager | kubelet | controller-manager-6c4bfbb4d5-77st9 | Created | Created container: controller-manager |
| | openshift-controller-manager | multus | controller-manager-6c4bfbb4d5-77st9 | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-6c4bfbb4d5-77st9 became leader |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-6c4bfbb4d5-77st9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine |
| | openshift-kube-scheduler | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-dns | multus | dns-default-5m4f8 | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-dns | kubelet | node-resolver-4xlhs | Started | Started container dns-node-resolver |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-dns | kubelet | node-resolver-4xlhs | Created | Created container: dns-node-resolver |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled down replica set cluster-version-operator-869c786959 to 0 from 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-869c786959 | SuccessfulDelete | Deleted pod: cluster-version-operator-869c786959-vrvwt |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "optional secret/webhook-authenticator has been created" |
| | openshift-cluster-version | kubelet | cluster-version-operator-869c786959-vrvwt | Killing | Stopping container cluster-version-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-controller-manager | replicaset-controller | controller-manager-6c4bfbb4d5 | SuccessfulDelete | Deleted pod: controller-manager-6c4bfbb4d5-77st9 |
| | openshift-controller-manager | replicaset-controller | controller-manager-6fb5f97c4d | SuccessfulCreate | Created pod: controller-manager-6fb5f97c4d-bcdbq |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no pods available on any node." to "Available: no route controller manager deployment pods available on any node.",status.versions changed from [] to [{"operator" "4.18.28"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-route-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing |
| (x4) | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | (combined from similar events): Scaled up replica set controller-manager-6fb5f97c4d to 1 from 0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | (combined from similar events): Scaled up replica set route-controller-manager-7f6f54d5f6 to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.28" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7f6f54d5f6 | SuccessfulCreate | Created pod: route-controller-manager-7f6f54d5f6-ch42s |
| | openshift-controller-manager | kubelet | controller-manager-6c4bfbb4d5-77st9 | Killing | Stopping container controller-manager |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-5dbcf69784 | SuccessfulDelete | Deleted pod: route-controller-manager-5dbcf69784-65p95 |
| | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51" in 7.489s (7.489s including waiting). Image size: 583850203 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-66bd7f46c9-p8fcq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" in 7.252s (7.253s including waiting). Image size: 481573011 bytes. |
| | openshift-dns | kubelet | dns-default-5m4f8 | Created | Created container: kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-5m4f8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6" in 5.275s (5.275s including waiting). Image size: 478655954 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-66bd7f46c9-p8fcq | Created | Created container: route-controller-manager |
| | openshift-dns | kubelet | dns-default-5m4f8 | Started | Started container kube-rbac-proxy |
| | openshift-route-controller-manager | kubelet | route-controller-manager-66bd7f46c9-p8fcq | Started | Started container route-controller-manager |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 5 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-dns | kubelet | dns-default-5m4f8 | Created | Created container: dns |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" |
| | openshift-dns | kubelet | dns-default-5m4f8 | Started | Started container dns |
openshift-multus |
multus |
network-metrics-daemon-ch7xd |
AddedInterface |
Add eth0 [10.128.0.3/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
network-metrics-daemon-ch7xd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d" | |
openshift-kube-apiserver |
kubelet |
installer-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
installer-1-master-0 |
AddedInterface |
Add eth0 [10.128.0.41/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Created |
Created container: fix-audit-permissions | |
openshift-dns |
kubelet |
dns-default-5m4f8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-69cc794c58-mfjk2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4e0b20fdb38d516e871ff5d593c4273cc9933cb6a65ec93e727ca4a7777fd20" | |
openshift-monitoring |
multus |
cluster-monitoring-operator-69cc794c58-mfjk2 |
AddedInterface |
Add eth0 [10.128.0.15/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-multus |
multus |
multus-admission-controller-78ddcf56f9-8l84w |
AddedInterface |
Add eth0 [10.128.0.14/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
multus |
package-server-manager-75b4d49d4c-h599p |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51" already present on machine | |
openshift-cluster-version |
kubelet |
cluster-version-operator-7c49fbfc6f-7krqx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" already present on machine | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-66bd7f46c9-p8fcq |
Killing |
Stopping container route-controller-manager | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-7c49fbfc6f |
SuccessfulCreate |
Created pod: cluster-version-operator-7c49fbfc6f-7krqx | |
openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine | |
openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Created |
Created container: network-operator | |
openshift-controller-manager |
kubelet |
controller-manager-6fb5f97c4d-bcdbq |
Started |
Started container controller-manager | |
openshift-controller-manager |
multus |
controller-manager-6fb5f97c4d-bcdbq |
AddedInterface |
Add eth0 [10.128.0.42/23] from ovn-kubernetes | |
openshift-controller-manager |
kubelet |
controller-manager-6fb5f97c4d-bcdbq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine | |
openshift-marketplace |
multus |
marketplace-operator-7d67745bb7-dwcxb |
AddedInterface |
Add eth0 [10.128.0.21/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
marketplace-operator-7d67745bb7-dwcxb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601" | |
openshift-controller-manager |
kubelet |
controller-manager-6fb5f97c4d-bcdbq |
Created |
Created container: controller-manager | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Created |
Created container: openshift-apiserver-check-endpoints | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-66bd7f46c9-p8fcq_3c04093b-cebf-4999-8030-b8c02ae5b738 became leader | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-1-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-1-master-0 |
Created |
Created container: installer | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
Created |
Created container: openshift-apiserver | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Started |
Started container kube-rbac-proxy | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-7c49fbfc6f to 1 | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-0_9a5b3d68-dc2e-4e05-af50-cab4810471fe became leader | |
openshift-network-operator |
kubelet |
network-operator-6cbf58c977-8lh6n |
Started |
Started container network-operator | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ServiceAccountCreated |
Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Created |
Created container: kube-rbac-proxy | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_1ca3d2c4-c8a0-4139-928f-7f7774f88234 became leader | |
openshift-cluster-version |
kubelet |
cluster-version-operator-7c49fbfc6f-7krqx |
Started |
Started container cluster-version-operator | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-57fd58bc7b |
SuccessfulCreate |
Created pod: apiserver-57fd58bc7b-kktql | |
openshift-cluster-version |
kubelet |
cluster-version-operator-7c49fbfc6f-7krqx |
Created |
Created container: cluster-version-operator | |
openshift-authentication-operator |
oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-57fd58bc7b to 1 | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-6fb5f97c4d-bcdbq became leader | |
openshift-authentication-operator |
cluster-authentication-operator-routercertsdomainvalidationcontroller |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. 
Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveRouterSecret |
namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}} | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" | |
| (x49) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Killing |
Stopping container installer | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver: client rate limiter Wait returned an error: context canceled | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-node namespace | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
| | | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Started | Started container multus-admission-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4e0b20fdb38d516e871ff5d593c4273cc9933cb6a65ec93e727ca4a7777fd20" in 5.514s (5.514s including waiting). Image size: 478931717 bytes. |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Created | Created container: cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Started | Started container cluster-monitoring-operator |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Created | Created container: multus-admission-controller |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Created | Created container: kube-apiserver-operator |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Started | Started container kube-apiserver-operator |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Started | Started container marketplace-operator |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64" |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Created | Created container: marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601" in 5.295s (5.295s including waiting). Image size: 452603646 bytes. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" in 5.476s (5.476s including waiting). Image size: 451053419 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d" in 5.456s (5.457s including waiting). Image size: 443305841 bytes. |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-kube-scheduler | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-q98rt" has been approved |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.28"}] to [{"operator" "4.18.28"} {"openshift-apiserver" "4.18.28"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.11:48709->172.30.0.10:53: read: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.11:48709->172.30.0.10:53: read: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.11:42336->172.30.0.10:53: read: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.11:42336->172.30.0.10:53: read: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Started | Started container network-metrics-daemon |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.28" |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Created | Created container: network-metrics-daemon |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-oauth-apiserver | multus | apiserver-57fd58bc7b-kktql | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f6f54d5f6-ch42s | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f6f54d5f6-ch42s | Created | Created container: route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f6f54d5f6-ch42s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-q98rt" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-k8ls6" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-k8ls6" has been approved |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Unhealthy | Readiness probe failed: Get "http://10.128.0.21:8080/healthz": dial tcp 10.128.0.21:8080: connect: connection refused |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | ProbeError | Readiness probe error: Get "http://10.128.0.21:8080/healthz": dial tcp 10.128.0.21:8080: connect: connection refused body: |
| | openshift-route-controller-manager | multus | route-controller-manager-7f6f54d5f6-ch42s | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Created | Created container: kube-rbac-proxy |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Started | Started container kube-rbac-proxy |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-7f6f54d5f6-ch42s_4ea95845-2449-4b74-a589-49a41c437349 became leader |
| | openshift-kube-controller-manager | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Started | Started container kube-rbac-proxy |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5b557b5f57-s5s96_fb18780f-af94-4a83-b20f-68d26a4d7f4c became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-6d4cbfb4b to 1 |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-6d4cbfb4b | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from 
https://10.128.0.37:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.37:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.37:8443/apis/template.openshift.io/v1: 401" | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| (x21) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-multus | replicaset-controller | multus-admission-controller-5bdcc987c4 | SuccessfulCreate | Created pod: multus-admission-controller-5bdcc987c4-x99xc |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5bdcc987c4 to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" in 12.832s (12.832s including waiting). Image size: 857083855 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf" in 6.595s (6.595s including waiting). Image size: 499812475 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": secret openshift-kube-controller-manager/localhost-recovery-client-token hasn't been populated with SA token yet: missing SA UID" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Created | Created container: package-server-manager |
| | openshift-multus | multus | multus-admission-controller-5bdcc987c4-x99xc | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config-2 -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Started | Started container fix-audit-permissions |
| | openshift-etcd | kubelet | etcd-master-0-master-0 | Killing | Stopping container etcdctl |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Started | Started container multus-admission-controller |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Started | Started container package-server-manager |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Created | Created container: kube-rbac-proxy |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Created | Created container: oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Started | Started container oauth-apiserver |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | ProbeError | Startup probe error: Get "https://10.128.0.43:8443/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Unhealthy | Startup probe failed: Get "https://10.128.0.43:8443/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | BackOff | Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-5f574c6c79-86bh9_openshift-kube-scheduler-operator(5aa67ace-d03a-4d06-9fb5-24777b65f2cc) |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | ProbeError | Startup probe error: Get "https://10.128.0.43:8443/livez": context deadline exceeded body: |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Unhealthy | Startup probe failed: Get "https://10.128.0.43:8443/livez": context deadline exceeded |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Unhealthy | Startup probe failed: Get "https://10.128.0.43:8443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | ProbeError | Startup probe error: Get "https://10.128.0.43:8443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | Unhealthy | Startup probe failed: Get "https://10.128.0.43:8443/livez": read tcp 10.128.0.2:51468->10.128.0.43:8443: read: connection reset by peer |
| | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | ProbeError | Startup probe error: Get "https://10.128.0.43:8443/livez": read tcp 10.128.0.2:51468->10.128.0.43:8443: read: connection reset by peer body: |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nAPIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" to "APIServerDeploymentProgressing: deployment/openshift-oauth-apiserver: could not be retrieved",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" already present on machine |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine | |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
Unhealthy |
Startup probe failed: Get "https://10.128.0.43:8443/livez": dial tcp 10.128.0.43:8443: connect: connection refused |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
ProbeError |
Startup probe error: Get "https://10.128.0.43:8443/livez": dial tcp 10.128.0.43:8443: connect: connection refused body: |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" | |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | Started | Started container openshift-apiserver-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | Started | Started container service-ca-operator |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | Created | Created container: etcd-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | Created | Created container: service-ca-operator |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Created | Created container: approver |
| (x3) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Started | Started container kube-scheduler-operator-container |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | Started | Started container etcd-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Created | Created container: kube-controller-manager-operator |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nAPIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nAPIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| (x3) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Created | Created container: kube-scheduler-operator-container |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | Created | Created container: openshift-apiserver-operator |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nAPIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing message changed from "APIServerDeploymentProgressing: deployment/openshift-oauth-apiserver: could not be retrieved" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available",Available message changed from "APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Started | Started container kube-controller-manager-operator |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | Created | Created container: kube-storage-version-migrator-operator |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | Started | Started container kube-storage-version-migrator-operator |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0 I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1203 13:56:06.068416 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1203 13:56:06.068461 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0 F1203 13:56:50.073979 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) 
\"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) 
(len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node 
master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" | |
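OperatorStatusChanged events like the ones above embed the entire "from" and "to" Degraded message, so the actual delta is buried in kilobytes of repeated text. Because the conditions are `\n`-separated lines, a plain line diff recovers what changed. A minimal sketch — `degraded_delta` is a hypothetical helper name, and the `old`/`new` strings below are abbreviated stand-ins for the full event text, not verbatim copies:

```python
import difflib

def degraded_delta(old: str, new: str) -> list[str]:
    """Return only the condition lines that were added or removed
    between two Degraded messages (newline-separated conditions)."""
    diff = difflib.ndiff(old.split("\n"), new.split("\n"))
    # ndiff prefixes removed lines with "- ", added lines with "+ ";
    # unchanged ("  ") and intraline-hint ("? ") lines are dropped.
    return [line for line in diff if line.startswith(("- ", "+ "))]

# Abbreviated stand-ins for the from/to strings in the events above.
old = ("NodeControllerDegraded: All master nodes are ready\n"
       "KubeAPIServerStaticResourcesDegraded: get serviceaccounts failed")
new = ("NodeControllerDegraded: All master nodes are ready\n"
       "TargetConfigControllerDegraded: get configmaps failed")

for line in degraded_delta(old, new):
    print(line)
```

Applied to the pair of events above, this would surface only the `KubeAPIServerStaticResourcesDegraded` condition being replaced by `TargetConfigControllerDegraded`, instead of re-reading the whole installer dump twice.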
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" |
| | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | ProbeError | Liveness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body: |
| | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | Unhealthy | Liveness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused |
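Repeated probe events like the ProbeError/Unhealthy pair above are easier to triage when grouped by reason and related object, since a flapping pod produces many near-duplicate rows. A minimal sketch; the `events` list below is a hand-built stand-in shaped like this table's rows, not data parsed from it:

```python
from collections import Counter

# Hand-built stand-ins for (Reason, RelatedObject) pairs from rows above.
events = [
    {"reason": "ProbeError", "object": "etcd-operator-7978bf889c-n64v4"},
    {"reason": "Unhealthy", "object": "etcd-operator-7978bf889c-n64v4"},
    {"reason": "SecretCreated", "object": "kube-apiserver-operator"},
]

# Count occurrences per (reason, object) to rank the noisiest pairs.
counts = Counter((e["reason"], e["object"]) for e in events)
for (reason, obj), n in counts.most_common():
    print(f"{reason} {obj} x{n}")
```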
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) 
\"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" 
enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed" |
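The RevisionTriggered message above packs its reasons into one string, comma-joined with no space after the comma. A small sketch of pulling the reasons apart into (qualifier, resource) pairs — `split_revision_triggers` is a hypothetical helper name, and the parsing assumes the `<qualifier> <kind>/<name> has ...` shape seen in this event:

```python
def split_revision_triggers(msg: str) -> list[tuple[str, str]]:
    """Split a RevisionTriggered reason string into (qualifier, resource)
    pairs. Reasons are comma-joined without a following space."""
    pairs = []
    for reason in msg.split(","):
        # e.g. "optional secret/webhook-authenticator has been created"
        qualifier, rest = reason.split(" ", 1)
        resource = rest.split(" ", 1)[0]
        pairs.append((qualifier, resource))
    return pairs

msg = ("optional secret/webhook-authenticator has been created,"
       "required configmap/config has changed")
print(split_revision_triggers(msg))
# [('optional', 'secret/webhook-authenticator'), ('required', 'configmap/config')]
```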
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) 
\"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) 
(len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:05.882333 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068211 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:06.068416 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.068461 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:06.072669 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:36.073133 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:50.073979 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" | |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_e2db93f6-3e60-4d45-9089-7e7fe65c131e became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_505c510f-b8ad-4a50-b125-cac65e2636e6 became leader |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-apiserver | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-7c4dc67499 | SuccessfulCreate | Created pod: cloud-credential-operator-7c4dc67499-tjwg8 |
| | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-f84784664 | SuccessfulCreate | Created pod: cluster-storage-operator-f84784664-ntb9w |
| | openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-6d64b47964 to 1 |
| | openshift-insights | deployment-controller | insights-operator | ScalingReplicaSet | Scaled up replica set insights-operator-59d99f9b7b to 1 |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-7c4dc67499 to 1 |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-7f88444875 to 1 |
| | openshift-cluster-storage-operator | deployment-controller | cluster-storage-operator | ScalingReplicaSet | Scaled up replica set cluster-storage-operator-f84784664 to 1 |
| | openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-6d64b47964 | SuccessfulCreate | Created pod: cluster-samples-operator-6d64b47964-jjd7h |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-5775bfbf6d to 1 |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-5775bfbf6d | SuccessfulCreate | Created pod: machine-approver-5775bfbf6d-vtvbd |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-5fdc576499 to 1 |
| | openshift-machine-api | deployment-controller | control-plane-machine-set-operator | ScalingReplicaSet | Scaled up replica set control-plane-machine-set-operator-66f4cc99d4 to 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-76f56467d7 to 1 |
| | openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-7486ff55f to 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-76f56467d7 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-76f56467d7-252sh |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"),status.versions changed from [{"operator" "4.18.28"}] to [{"operator" "4.18.28"} {"oauth-apiserver" "4.18.28"}] |
| | openshift-machine-api | replicaset-controller | machine-api-operator-7486ff55f | SuccessfulCreate | Created pod: machine-api-operator-7486ff55f-wcnxg |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | Created | Created container: ingress-operator |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | Started | Started container ingress-operator |
| | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-66f4cc99d4 | SuccessfulCreate | Created pod: control-plane-machine-set-operator-66f4cc99d4-x278n |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5775bfbf6d-vtvbd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5775bfbf6d-vtvbd | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-7f88444875 | SuccessfulCreate | Created pod: cluster-autoscaler-operator-7f88444875-6dk29 |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-5fdc576499 | SuccessfulCreate | Created pod: cluster-baremetal-operator-5fdc576499-j2n8j |
| | openshift-insights | replicaset-controller | insights-operator-59d99f9b7b | SuccessfulCreate | Created pod: insights-operator-59d99f9b7b-74sss |
| | openshift-machine-config-operator | replicaset-controller | machine-config-operator-664c9d94c9 | SuccessfulCreate | Created pod: machine-config-operator-664c9d94c9-9vfr4 |
| | openshift-machine-config-operator | deployment-controller | machine-config-operator | ScalingReplicaSet | Scaled up replica set machine-config-operator-664c9d94c9 to 1 |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-78ddcf56f9 to 0 from 1 |
| | openshift-multus | replicaset-controller | multus-admission-controller-78ddcf56f9 | SuccessfulDelete | Deleted pod: multus-admission-controller-78ddcf56f9-8l84w |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.28" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-5f574c6c79-86bh9_d4ba20b5-22dd-42a5-94d9-3d5c3fb6e1e1 became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Killing | Stopping container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5775bfbf6d-vtvbd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-67c4cff67d-q2lxz_61e499f7-e943-4ee8-b8fd-e4a3859cd940 became leader |
| | openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-8l84w | Killing | Stopping container multus-admission-controller |
| | openshift-operator-lifecycle-manager | deployment-controller | olm-operator | ScalingReplicaSet | Scaled up replica set olm-operator-76bd5d69c7 to 1 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5775bfbf6d-vtvbd | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" |
| | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-7cf5cf757f | SuccessfulCreate | Created pod: catalog-operator-7cf5cf757f-zgm6l |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-76bd5d69c7 | SuccessfulCreate | Created pod: olm-operator-76bd5d69c7-fjrrg |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-lifecycle-manager | deployment-controller | catalog-operator | ScalingReplicaSet | Scaled up replica set catalog-operator-7cf5cf757f to 1 |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03" |
| | openshift-machine-api | multus | control-plane-machine-set-operator-66f4cc99d4-x278n | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-7c4dc67499-tjwg8 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | olm-operator-76bd5d69c7-fjrrg | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-machine-config-operator | multus | machine-config-operator-664c9d94c9-9vfr4 | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-baremetal-operator-5fdc576499-j2n8j | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad" |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | Started | Started container kube-rbac-proxy |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44e82a51fce7b5996b183c10c44bd79b0e1ae2257fd5809345fbca1c50aaa08f" |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-api | multus | machine-api-operator-7486ff55f-wcnxg | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-f84784664-ntb9w | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328" |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-6d64b47964-jjd7h | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-7cf5cf757f-zgm6l | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-autoscaler-operator-7f88444875-6dk29 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | Created | Created container: kube-rbac-proxy |
| | openshift-insights | multus | insights-operator-59d99f9b7b-74sss | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Created | Created container: kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | Started | Started container olm-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Started | Started container kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 5" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfc0403f71f7c926db1084c7fb5fb4f19007271213ee34f6f3d3eecdbe817d6b" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d41c3e944e86b73b4ba0d037ff016562211988f3206b9deb6cc7dccca708248" |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | Created | Created container: olm-operator |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e" |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Created | Created container: machine-config-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Started | Started container machine-config-operator |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Created | Created container: catalog-operator |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config started a version change from [] to [{operator 4.18.28} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a}] |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Started | Started container catalog-operator |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Created | Created container: kube-rbac-proxy |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7f6f54d5f6 to 0 from 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6fb5f97c4d to 0 from 1 |
| | openshift-controller-manager | kubelet | controller-manager-6fb5f97c4d-bcdbq | Killing | Stopping container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-6fb5f97c4d | SuccessfulDelete | Deleted pod: controller-manager-6fb5f97c4d-bcdbq |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7.",Available changed from False to True ("All is well") | |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| (x3) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6fcd4b8856 |
SuccessfulCreate |
Created pod: route-controller-manager-6fcd4b8856-ztns6 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-7f6f54d5f6-ch42s |
Killing |
Stopping container route-controller-manager | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7f6f54d5f6 |
SuccessfulDelete |
Deleted pod: route-controller-manager-7f6f54d5f6-ch42s | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-6fcd4b8856 to 1 from 0 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no pods available on any node.") | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-7d8fb964c9 to 1 from 0 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7d8fb964c9 |
SuccessfulCreate |
Created pod: controller-manager-7d8fb964c9-v2h98 | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-daemon |
SuccessfulCreate |
Created pod: machine-config-daemon-2ztl9 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-config-operator |
replicaset-controller |
openshift-config-operator-68c95b6cf5 |
SuccessfulCreate |
Created pod: openshift-config-operator-68c95b6cf5-fmdmz | |
openshift-config-operator |
deployment-controller |
openshift-config-operator |
ScalingReplicaSet |
Scaled up replica set openshift-config-operator-68c95b6cf5 to 1 | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-76f56467d7-252sh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" in 25.609s (25.609s including waiting). Image size: 551903461 bytes. | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-5775bfbf6d-vtvbd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1" in 25.273s (25.273s including waiting). Image size: 461716546 bytes. | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-6d64b47964-jjd7h |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" in 25.495s (25.495s including waiting). Image size: 449985691 bytes. | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328" in 25.762s (25.762s including waiting). Image size: 465158513 bytes. | |
openshift-marketplace |
multus |
certified-operators-t8rt7 |
AddedInterface |
Add eth0 [10.128.0.61/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7f88444875-6dk29 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d41c3e944e86b73b4ba0d037ff016562211988f3206b9deb6cc7dccca708248" in 24.078s (24.078s including waiting). Image size: 450855746 bytes. | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfc0403f71f7c926db1084c7fb5fb4f19007271213ee34f6f3d3eecdbe817d6b" in 27.539s (27.539s including waiting). Image size: 874839630 bytes. | |
openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad" in 28.266s (28.266s including waiting). Image size: 856674149 bytes. | |
openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44e82a51fce7b5996b183c10c44bd79b0e1ae2257fd5809345fbca1c50aaa08f" in 28.677s (28.677s including waiting). Image size: 499138950 bytes. | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-2ztl9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e" in 27.351s (27.351s including waiting). Image size: 465302163 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-t8rt7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03" in 28.369s (28.369s including waiting). Image size: 508056015 bytes. | |
openshift-cloud-controller-manager |
cloud-controller-manager-operator |
openshift-cloud-controller-manager |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-marketplace |
multus |
redhat-operators-6rjqz |
AddedInterface |
Add eth0 [10.128.0.65/23] from ovn-kubernetes | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
Started |
Started container cluster-storage-operator | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
Created |
Created container: cluster-storage-operator | |
openshift-config-operator |
multus |
openshift-config-operator-68c95b6cf5-fmdmz |
AddedInterface |
Add eth0 [10.128.0.68/23] from ovn-kubernetes | |
openshift-config-operator |
kubelet |
openshift-config-operator-68c95b6cf5-fmdmz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e0e3400f1cb68a205bfb841b6b1a78045e7d80703830aa64979d46418d19c835" | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6fcd4b8856-ztns6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine | |
openshift-route-controller-manager |
multus |
route-controller-manager-6fcd4b8856-ztns6 |
AddedInterface |
Add eth0 [10.128.0.66/23] from ovn-kubernetes | |
openshift-kube-scheduler |
multus |
installer-5-master-0 |
AddedInterface |
Add eth0 [10.128.0.60/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
installer-5-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-6d64b47964-jjd7h |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" already present on machine | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-6d64b47964-jjd7h |
Started |
Started container cluster-samples-operator | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7f88444875-6dk29 |
Created |
Created container: cluster-autoscaler-operator | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7f88444875-6dk29 |
Started |
Started container cluster-autoscaler-operator | |
openshift-machine-api |
cluster-autoscaler-operator-7f88444875-6dk29_c098c173-9c63-41d0-b436-7ad45efcf0f0 |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-7f88444875-6dk29_c098c173-9c63-41d0-b436-7ad45efcf0f0 became leader | |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
Created |
Created container: cluster-baremetal-operator | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver | |
openshift-machine-api |
control-plane-machine-set-operator-66f4cc99d4-x278n_30af8e24-fc6c-4fd0-b6c7-8bd41b865cef |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-66f4cc99d4-x278n_30af8e24-fc6c-4fd0-b6c7-8bd41b865cef became leader | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
Created |
Created container: control-plane-machine-set-operator | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
Started |
Started container control-plane-machine-set-operator | |
openshift-marketplace |
kubelet |
community-operators-7fwtv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-marketplace |
multus |
community-operators-7fwtv |
AddedInterface |
Add eth0 [10.128.0.67/23] from ovn-kubernetes | |
openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
Started |
Started container insights-operator | |
openshift-marketplace |
multus |
community-operators-582c5 |
AddedInterface |
Add eth0 [10.128.0.62/23] from ovn-kubernetes | |
openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
Created |
Created container: insights-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-6d64b47964-jjd7h |
Created |
Created container: cluster-samples-operator | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-5775bfbf6d-vtvbd |
Started |
Started container machine-approver-controller | |
openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
Created |
Created container: machine-api-operator | |
openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
Started |
Started container machine-api-operator | |
openshift-marketplace |
multus |
redhat-marketplace-mtm6s |
AddedInterface |
Add eth0 [10.128.0.63/23] from ovn-kubernetes | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-2ztl9 |
Created |
Created container: machine-config-daemon | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-2ztl9 |
Started |
Started container machine-config-daemon | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-2ztl9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-t8rt7 |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-t8rt7 |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-mtm6s |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-5775bfbf6d-vtvbd |
Created |
Created container: machine-approver-controller | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-2ztl9 |
Created |
Created container: kube-rbac-proxy | |
openshift-cluster-machine-approver |
master-0_3456b571-7a2c-4350-a3dd-35e584326b28 |
cluster-machine-approver-leader |
LeaderElection |
master-0_3456b571-7a2c-4350-a3dd-35e584326b28 became leader | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
Created |
Created container: cloud-credential-operator | |
openshift-marketplace |
kubelet |
redhat-operators-6rjqz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-76f56467d7-252sh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-76f56467d7-252sh |
Started |
Started container cluster-cloud-controller-manager | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-76f56467d7-252sh |
Created |
Created container: cluster-cloud-controller-manager | |
openshift-controller-manager |
multus |
controller-manager-7d8fb964c9-v2h98 |
AddedInterface |
Add eth0 [10.128.0.64/23] from ovn-kubernetes | |
openshift-controller-manager |
kubelet |
controller-manager-7d8fb964c9-v2h98 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine | |
openshift-cloud-controller-manager-operator |
master-0_1114c6d9-cde2-496a-9385-25d5d33dc68a |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-0_1114c6d9-cde2-496a-9385-25d5d33dc68a became leader | |
default |
kubelet |
master-0 |
Starting |
Starting kubelet. | |
default |
apiserver |
openshift-kube-apiserver |
TerminationGracefulTerminationFinished |
All pending requests processed | |
default |
kubelet |
master-0 |
NodeAllocatableEnforced |
Updated Node Allocatable limit across pods | |
| (x8) | default |
kubelet |
master-0 |
NodeHasSufficientMemory |
Node master-0 status is now: NodeHasSufficientMemory |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine | |
| (x7) | default |
kubelet |
master-0 |
NodeHasSufficientPID |
Node master-0 status is now: NodeHasSufficientPID |
| (x8) | default |
kubelet |
master-0 |
NodeHasNoDiskPressure |
Node master-0 status is now: NodeHasNoDiskPressure |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Created |
Created container: startup-monitor | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Started |
Started container startup-monitor | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
KubeAPIReadyz |
readyz=true | |
| (x2) | openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-check-endpoints |
| (x2) | openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-check-endpoints |
| (x2) | openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
openshift-network-node-identity |
master-0_016adab6-948e-46a5-9aeb-5cf34c8a56db |
ovnkube-identity |
LeaderElection |
master-0_016adab6-948e-46a5-9aeb-5cf34c8a56db became leader | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-7cf5cf757f-zgm6l |
FailedMount |
MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-controller-manager |
kubelet |
controller-manager-7d8fb964c9-v2h98 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6fcd4b8856-ztns6 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-2ztl9 |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition | |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-7d8fb964c9-v2h98 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-7d8fb964c9-v2h98 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | FailedMount | MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-7d8fb964c9-v2h98 | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | FailedMount | MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | machine-config-operator | openshift-machine-config-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_2c130d43-f766-4f94-87c7-7edce109791e became leader |
| | openshift-marketplace | kubelet | redhat-marketplace-mtm6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e0e3400f1cb68a205bfb841b6b1a78045e7d80703830aa64979d46418d19c835" |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-6rjqz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-controller-manager | kubelet | controller-manager-7d8fb964c9-v2h98 | Created | Created container: controller-manager |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-5775bfbf6d to 0 from 1 |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | Started | Started container route-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | master-0_027319c7-1dcf-46d6-8dee-f0d258106532 | cluster-cloud-config-sync-leader | LeaderElection | master-0_027319c7-1dcf-46d6-8dee-f0d258106532 became leader |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: extract-utilities |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | Created | Created container: route-controller-manager |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Created | Created container: baremetal-kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-operators-6rjqz | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-mtm6s | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-mtm6s | Started | Started container extract-utilities |
| | openshift-controller-manager | kubelet | controller-manager-7d8fb964c9-v2h98 | Started | Started container controller-manager |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-controller-manager | kubelet | controller-manager-7d8fb964c9-v2h98 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Created | Created container: config-sync-controllers |
| | openshift-marketplace | kubelet | redhat-marketplace-mtm6s | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-6rjqz | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-6rjqz | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Started | Started container kube-rbac-proxy |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | BackOff | Back-off restarting failed container insights-operator in pod insights-operator-59d99f9b7b-74sss_openshift-insights(c95705e3-17ef-40fe-89e8-22586a32621b) |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.28"}] |
| (x2) | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.28" |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-f84784664-ntb9w_7a6e3037-de46-4da5-b222-80cab0f67e05 became leader |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5fdc576499-j2n8j_openshift-machine-api(690d1f81-7b1f-4fd0-9b6e-154c9687c744) |
| (x2) | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| (x4) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | BackOff | Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b) |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5775bfbf6d-vtvbd | Killing | Stopping container machine-approver-controller |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5775bfbf6d-vtvbd | Killing | Stopping container kube-rbac-proxy |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6fcd4b8856-ztns6_a1003e38-f98a-41f8-b15f-91c44d8b8999 became leader |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-5775bfbf6d | SuccessfulDelete | Deleted pod: machine-approver-5775bfbf6d-vtvbd |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-76f56467d7 | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-76f56467d7-252sh |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Killing | Stopping container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-76f56467d7 to 0 from 1 |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Killing | Stopping container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-252sh | Killing | Stopping container kube-rbac-proxy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| (x2) | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44e82a51fce7b5996b183c10c44bd79b0e1ae2257fd5809345fbca1c50aaa08f" already present on machine |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e" already present on machine |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-cb84b9cdf to 1 |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-cb84b9cdf | SuccessfulCreate | Created pod: machine-approver-cb84b9cdf-qn94w |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.13" |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7d8fb964c9-v2h98 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.28"}] to [{"raw-internal" "4.18.28"} {"operator" "4.18.28"} {"kube-apiserver" "1.31.13"}] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.28" |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdDeploymentCatalogdControllerManagerDegraded: Get \"https://172.30.0.1:443/apis/operator.openshift.io/v1/olms/cluster\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: Get \"https://172.30.0.1:443/apis/operator.openshift.io/v1/olms/cluster\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection 
refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: 
connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection 
refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " | |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: 
connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: 
connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " | |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e0e3400f1cb68a205bfb841b6b1a78045e7d80703830aa64979d46418d19c835" in 33.488s (33.488s including waiting). Image size: 433128028 bytes. |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well" | |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-master-0 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b)" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-mtm6s | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 37.204s (37.204s including waiting). Image size: 1129027903 bytes. |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Created | Created container: cluster-baremetal-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 37.877s (37.877s including waiting). Image size: 1204969293 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-6rjqz | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 37.319s (37.319s including waiting). Image size: 1609873225 bytes. |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: ",Progressing changed from False to True (""),Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 37.35s (37.35s including waiting). Image size: 1201319250 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.28 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-6c74dddbfb to 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-6c74dddbfb | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-mtm6s | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-6rjqz | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-mtm6s | Started | Started container extract-content |
| (x2) | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | Created | Created container: insights-operator |
| (x2) | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | Started | Started container insights-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Created | Created container: openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Started | Started container openshift-api |
| | openshift-marketplace | kubelet | redhat-operators-6rjqz | Created | Created container: extract-content |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing |
| | openshift-machine-api | cluster-baremetal-operator-5fdc576499-j2n8j_c05cafef-f200-4555-9529-c6ca54a85720 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-5fdc576499-j2n8j_c05cafef-f200-4555-9529-c6ca54a85720 became leader |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container extract-content |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Started | Started container cluster-baremetal-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Started | Started container extract-content |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-74cddd4fb5 | SuccessfulCreate | Created pod: machine-config-controller-74cddd4fb5-phk6r |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-74cddd4fb5 to 1 |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251" |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-master-0 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b)" to "NodeControllerDegraded: All master nodes are ready" |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_8cebb117-7c3d-4dbf-9edc-8ec68c0164cc became leader |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine |
| | openshift-kube-scheduler | static-pod-installer | openshift-kube-scheduler | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-machine-config-operator | multus | machine-config-controller-74cddd4fb5-phk6r | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 49.095s (49.095s including waiting). Image size: 912736453 bytes. |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Created | Created container: kube-rbac-proxy |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container registry-server |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 49.057s (49.057s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Started | Started container registry-server |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Started | Started container cluster-cloud-controller-manager |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Started | Started container openshift-config-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Created | Created container: openshift-config-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2") |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251" in 48.877s (48.877s including waiting). Image size: 490470354 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 2 because static pod is ready |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Created | Created container: cluster-cloud-controller-manager |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.13" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-ddwmn | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.28"}] to [{"raw-internal" "4.18.28"} {"kube-scheduler" "1.31.13"} {"operator" "4.18.28"}] |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.28" |
| | openshift-marketplace | multus | redhat-operators-6z4sc | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1" already present on machine |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: extract-utilities |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Created | Created container: machine-approver-controller |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Started | Started container config-sync-controllers |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container extract-utilities |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Started | Started container machine-approver-controller |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Created | Created container: kube-rbac-proxy |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Unhealthy | Liveness probe failed: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | ProbeError | Readiness probe error: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused body: |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Unhealthy | Readiness probe failed: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | ProbeError | Liveness probe error: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused body: |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-cluster-machine-approver | master-0_5d595ff6-1690-4986-a203-86875e0a9b9f | cluster-machine-approver-leader | LeaderElection | master-0_5d595ff6-1690-4986-a203-86875e0a9b9f became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Created | Created container: machine-config-controller |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: extract-utilities |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 3.607s (3.607s including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container extract-utilities |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_1d7bc4d4-5977-4fbb-b138-5f4d96afde4d became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-68c95b6cf5-fmdmz_49beae0e-1c65-4f11-907f-1b2f57f28818 became leader |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Started | Started container kube-rbac-proxy |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Started | Started container machine-config-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.28"} {"operator" "4.18.28"}] |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 390ms (390ms including waiting). Image size: 912736453 bytes. |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "operator" changed from "" to "4.18.28" |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.28" |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | ConfigOperatorStatusChanged | Operator conditions defaulted: [{LatencySensitiveRemovalControllerDegraded False 2025-12-03 14:00:22 +0000 UTC AsExpected } {OperatorAvailable True 2025-12-03 14:00:22 +0000 UTC AsExpected } {OperatorProgressing False 2025-12-03 14:00:22 +0000 UTC AsExpected } {OperatorUpgradeable True 2025-12-03 14:00:22 +0000 UTC AsExpected }] |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8." |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 658ms (658ms including waiting). Image size: 1609873225 bytes. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 500ms (500ms including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container registry-server |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-ingress | kubelet | router-default-54f97f57-rr9px | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed4dc45b0e0d6229620e2ac6a53ecd180cad44a11daf9f0170d94b4acd35ded" |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f870aa3c7bcd039c7905b2c7a9e9c0776d76ed4cf34ccbef872ae7ad8cf2157f" |
| | openshift-network-diagnostics | multus | network-check-source-6964bb78b7-g4lv2 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Created | Created container: check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Started | Started container check-endpoints |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_aa373a22-a81d-484a-b695-d75e76aff7ac became leader |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | Killing | Stopping container route-controller-manager |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29412840 |
SuccessfulCreate |
Created pod: collect-profiles-29412840-nfbpl | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7d8fb964c9 |
SuccessfulDelete |
Deleted pod: controller-manager-7d8fb964c9-v2h98 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 3 triggered by "optional configmap/oauth-metadata has been created" | |
openshift-console-operator |
deployment-controller |
console-operator |
ScalingReplicaSet |
Scaled up replica set console-operator-77df56447c to 1 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5c8b4c9687 |
SuccessfulCreate |
Created pod: controller-manager-5c8b4c9687-4pxw5 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7d8fb964c9 to 0 from 1 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-5c8b4c9687 to 1 from 0 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-7d8fb964c9-v2h98 | Killing | Stopping container controller-manager |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-vkpv4 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f870aa3c7bcd039c7905b2c7a9e9c0776d76ed4cf34ccbef872ae7ad8cf2157f" in 6.35s (6.35s including waiting). Image size: 439054449 bytes. |
| | openshift-console-operator | replicaset-controller | console-operator-77df56447c | SuccessfulCreate | Created pod: console-operator-77df56447c-vsrxx |
| | openshift-authentication-operator | cluster-authentication-operator-metadata-controller-openshift-authentication-metadata | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-84f75d5446 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6fcd4b8856 to 0 from 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-84f75d5446 | SuccessfulCreate | Created pod: route-controller-manager-84f75d5446-j8tkx |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29412840 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6fcd4b8856 | SuccessfulDelete | Deleted pod: route-controller-manager-6fcd4b8856-ztns6 |
| | openshift-ingress | kubelet | router-default-54f97f57-rr9px | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed4dc45b0e0d6229620e2ac6a53ecd180cad44a11daf9f0170d94b4acd35ded" in 6.87s (6.87s including waiting). Image size: 481523147 bytes. |
| | openshift-ingress | kubelet | router-default-54f97f57-rr9px | Created | Created container: router |
| | openshift-ingress | kubelet | router-default-54f97f57-rr9px | Started | Started container router |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-565bdcb8 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412840-nfbpl | Created | Created container: collect-profiles |
| | openshift-ingress-canary | multus | ingress-canary-vkpv4 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-monitoring | replicaset-controller | prometheus-operator-565bdcb8 | SuccessfulCreate | Created pod: prometheus-operator-565bdcb8-477pk |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " |
| | openshift-console-operator | multus | console-operator-77df56447c-vsrxx | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412840-nfbpl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29412840-nfbpl | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89b279931fe13f3b33c9dd6cdf0f5e7fc3e5384b944f998034d35af7242a47fa" |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412840-nfbpl | Started | Started container collect-profiles |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | Created | Created container: serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | Started | Started container serve-healthcheck-canary |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5c8b4c9687-4pxw5 became leader |
| | openshift-monitoring | multus | prometheus-operator-565bdcb8-477pk | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:903557bdbb44cf720481cc9b305a8060f327435d303c95e710b92669ff43d055" |
| | openshift-controller-manager | kubelet | controller-manager-5c8b4c9687-4pxw5 | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-5c8b4c9687-4pxw5 | Created | Created container: controller-manager |
| | openshift-controller-manager | multus | controller-manager-5c8b4c9687-4pxw5 | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-5c8b4c9687-4pxw5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-pvrfs |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-459a0309a4bacb184a38028403c86289 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | Unhealthy | Readiness probe failed: Get "https://10.128.0.66:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-6fcd4b8856-ztns6 | ProbeError | Readiness probe error: Get "https://10.128.0.66:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-459a0309a4bacb184a38028403c86289 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason=missing MachineConfig rendered-master-459a0309a4bacb184a38028403c86289 machineconfig.machineconfiguration.openshift.io "rendered-master-459a0309a4bacb184a38028403c86289" not found |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Started | Started container machine-config-server |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-79f7f4d988 to 1 |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Created | Created container: machine-config-server |
| | openshift-authentication | replicaset-controller | oauth-openshift-79f7f4d988 | SuccessfulCreate | Created pod: oauth-openshift-79f7f4d988-pxd4d |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-2072f444cb169be2ed482bc255f04f4f successfully generated (release version: 4.18.28, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-337418c1f91a4453abaa311a5ee047f8 successfully generated (release version: 4.18.28, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f) |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29412840 | Completed | Job completed |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:903557bdbb44cf720481cc9b305a8060f327435d303c95e710b92669ff43d055" in 6.818s (6.818s including waiting). Image size: 456021712 bytes. |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-77df56447c-vsrxx_f50a719a-3cde-4ccc-9eaa-193863189ca8 became leader |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89b279931fe13f3b33c9dd6cdf0f5e7fc3e5384b944f998034d35af7242a47fa" in 7.694s (7.694s including waiting). Image size: 506716062 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Started | Started container prometheus-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-79f7f4d988-pxd4d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e" |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | Created | Created container: console-operator |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29412840, condition: Complete |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | Started | Started container console-operator |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | ProbeError | Readiness probe error: Get "https://10.128.0.75:8443/readyz": dial tcp 10.128.0.75:8443: connect: connection refused body: |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Created | Created container: prometheus-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | Unhealthy | Readiness probe failed: Get "https://10.128.0.75:8443/readyz": dial tcp 10.128.0.75:8443: connect: connection refused |
| | openshift-authentication | multus | oauth-openshift-79f7f4d988-pxd4d | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing |
openshift-console-operator |
console-operator |
console-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: RequiredPoolsFailed |
Unable to apply 4.18.28: error during syncRequiredMachineConfigPools: context deadline exceeded | |
openshift-monitoring |
kubelet |
prometheus-operator-565bdcb8-477pk |
Created |
Created container: kube-rbac-proxy | |
openshift-console |
multus |
downloads-6f5db8559b-96ljh |
AddedInterface |
Add eth0 [10.128.0.80/23] from ovn-kubernetes | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentCreated |
Created Deployment.apps/downloads -n openshift-console because it was missing | |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d886210d2faa9ace5750adfc70c0c3c5512cdf492f19d1c536a446db659aabb" |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-console-operator | console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.18.28" |
| | openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.28"}] |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing |
| (x2) | openshift-console | controllermanager | console | NoPods | No matching pods found |
| | openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-6f5db8559b to 1 |
| (x2) | openshift-console | controllermanager | downloads | NoPods | No matching pods found |
| | openshift-console | replicaset-controller | downloads-6f5db8559b | SuccessfulCreate | Created pod: downloads-6f5db8559b-96ljh |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-79f7f4d988-pxd4d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e" in 2.135s (2.135s including waiting). Image size: 475935749 bytes. |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-79f7f4d988-pxd4d | Started | Started container oauth-openshift |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-57cbc648f8 | SuccessfulCreate | Created pod: openshift-state-metrics-57cbc648f8-q4cgg |
| | openshift-monitoring | replicaset-controller | kube-state-metrics-7dcc7f9bd6 | SuccessfulCreate | Created pod: kube-state-metrics-7dcc7f9bd6-68wml |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-7dcc7f9bd6 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-79f7f4d988-pxd4d | Created | Created container: oauth-openshift |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreateFailed | Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-57cbc648f8 to 1 |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-b62gf |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-2072f444cb169be2ed482bc255f04f4f |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node master-0 to MachineConfig: rendered-master-2072f444cb169be2ed482bc255f04f4f |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-ac51e1cc2e343e3be6926ba118fd6150 successfully generated (release version: 4.18.28, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f) |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0737727dcbfb50c3c09b69684ba3c07b5a4ab7652bbe4970a46d6a11c4a2bca" |
| | openshift-monitoring | multus | kube-state-metrics-7dcc7f9bd6-68wml | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e39fd49a8aa33e4b750267b4e773492b85c08cc7830cd7b22e64a92bcb5b6729" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8." to "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no route controller manager deployment pods available on any node.") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | multus | openshift-state-metrics-57cbc648f8-q4cgg | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.28_openshift" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.28"} {"oauth-apiserver" "4.18.28"}] to [{"operator" "4.18.28"} {"oauth-apiserver" "4.18.28"} {"oauth-openshift" "4.18.28_openshift"}] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-459a0309a4bacb184a38028403c86289 successfully generated (release version: 4.18.28, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84f75d5446-j8tkx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-cc996c4bd to 1 |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Created | Created container: init-textfile |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84f75d5446-j8tkx | Started | Started container route-controller-manager |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0" in 1.855s (1.855s including waiting). Image size: 412194448 bytes. |
| | openshift-monitoring | replicaset-controller | thanos-querier-cc996c4bd | SuccessfulCreate | Created pod: thanos-querier-cc996c4bd-j4hzr |
| | openshift-route-controller-manager | multus | route-controller-manager-84f75d5446-j8tkx | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Started | Started container init-textfile |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84f75d5446-j8tkx | Created | Created container: route-controller-manager |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-grpc-tls-33kamir7f7ukf -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF") |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-84f75d5446-j8tkx_ceee8b52-d67f-416f-801c-72b196ea0ae7 became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: openshift-state-metrics |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e39fd49a8aa33e4b750267b4e773492b85c08cc7830cd7b22e64a92bcb5b6729" in 2.49s (2.49s including waiting). Image size: 426456059 bytes. |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-c5d7cd7f9 to 1 |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container openshift-state-metrics |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" in 1.59s (1.59s including waiting). Image size: 432391273 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-console | replicaset-controller | console-c5d7cd7f9 | SuccessfulCreate | Created pod: console-c5d7cd7f9-2hp75 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "All is well" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13" |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab |
| | openshift-console | multus | console-c5d7cd7f9-2hp75 | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7" |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | multus | thanos-querier-cc996c4bd-j4hzr | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-c5d7cd7f9-2hp75 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" |
| | openshift-monitoring | replicaset-controller | metrics-server-555496955b | SuccessfulCreate | Created pod: metrics-server-555496955b-vpcbs |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-555496955b to 1 |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0737727dcbfb50c3c09b69684ba3c07b5a4ab7652bbe4970a46d6a11c4a2bca" in 4.013s (4.013s including waiting). Image size: 435033168 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-2bc14vqi7sofg -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-547cc9cc49 to 1 |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-547cc9cc49 | SuccessfulCreate | Created pod: monitoring-plugin-547cc9cc49-kqs4k |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Started | Started container kube-rbac-proxy-main |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Started | Started container kube-rbac-proxy-self |
| | openshift-operator-lifecycle-manager | package-server-manager-75b4d49d4c-h599p_5769ea85-326e-447b-a03c-b276835fb71f | packageserver-controller-lock | LeaderElection | package-server-manager-75b4d49d4c-h599p_5769ea85-326e-447b-a03c-b276835fb71f became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-8ekn1l23o09kv -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Created | Created container: kube-state-metrics |
| | openshift-monitoring | multus | monitoring-plugin-547cc9cc49-kqs4k | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-monitoring | multus | metrics-server-555496955b-vpcbs | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "optional configmap/oauth-metadata has been created" |
| | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 4 triggered by "required configmap/config has changed" |
| | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cc3977d34490059b692d5fbdb89bb9a676db39c88faa35f5d9b4e98f6b0c4e2" |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30948d73ae763e995468b7e0767b855425ccbbbef13667a2fd3ba06b3c40a165" |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | RequirementsUnknown | requirements not yet checked |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-7c64dd9d8b | SuccessfulCreate | Created pod: packageserver-7c64dd9d8b-49skr |
| | openshift-operator-lifecycle-manager | deployment-controller | packageserver | ScalingReplicaSet | Scaled up replica set packageserver-7c64dd9d8b to 1 |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console | replicaset-controller | console-648d88c756 | SuccessfulCreate | Created pod: console-648d88c756-vswh8 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-648d88c756 to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-747bdb58b5 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13" in 18.701s (18.702s including waiting). Image size: 462015571 bytes. |
| | openshift-authentication | replicaset-controller | oauth-openshift-747bdb58b5 | SuccessfulCreate | Created pod: oauth-openshift-747bdb58b5-mn76f |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-console | kubelet | console-c5d7cd7f9-2hp75 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" in 17.838s (17.838s including waiting). Image size: 628318378 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7" in 19.046s (19.046s including waiting). Image size: 497188567 bytes. |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Upgradeable changed from Unknown to True ("All is well") |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-79f7f4d988 to 0 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-79f7f4d988 | SuccessfulDelete | Deleted pod: oauth-openshift-79f7f4d988-pxd4d |
| | openshift-authentication | kubelet | oauth-openshift-79f7f4d988-pxd4d | Killing | Stopping container oauth-openshift |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-operator-lifecycle-manager | multus | packageserver-7c64dd9d8b-49skr | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") |
| (x3) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.28, 0 replicas available" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 5 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 4 triggered by "required configmap/config has changed" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d886210d2faa9ace5750adfc70c0c3c5512cdf492f19d1c536a446db659aabb" in 36.701s (36.701s including waiting). Image size: 2890256335 bytes. |
| | openshift-console | multus | console-648d88c756-vswh8 | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30948d73ae763e995468b7e0767b855425ccbbbef13667a2fd3ba06b3c40a165" in 26.589s (26.589s including waiting). Image size: 442285269 bytes. |
| | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cc3977d34490059b692d5fbdb89bb9a676db39c88faa35f5d9b4e98f6b0c4e2" in 26.615s (26.615s including waiting). Image size: 465908524 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine |
| | openshift-kube-apiserver | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Started | Started container monitoring-plugin |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4" |
| | openshift-console | kubelet | console-c5d7cd7f9-2hp75 | Created | Created container: console |
| | openshift-console | kubelet | console-c5d7cd7f9-2hp75 | Started | Started container console |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Created | Created container: thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | Started | Started container metrics-server |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-console | kubelet | console-648d88c756-vswh8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | Created | Created container: packageserver |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | Started | Started container packageserver |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Created | Created container: monitoring-plugin |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Created | Created container: download-server |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Started | Started container download-server |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | ProbeError | Readiness probe error: Get "https://10.128.0.90:5443/healthz": dial tcp 10.128.0.90:5443: connect: connection refused body: |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | Unhealthy | Readiness probe failed: Get "https://10.128.0.90:5443/healthz": dial tcp 10.128.0.90:5443: connect: connection refused |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-console | kubelet | console-648d88c756-vswh8 | Created | Created container: console |
| | openshift-console | kubelet | console-648d88c756-vswh8 | Started | Started container console |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78f6aebe76fa9da71b631ceced1ed159d8b60a6fa8e0325fd098c7b029039e89" |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Created | Created container: kube-rbac-proxy |
| (x3) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | ProbeError | Readiness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | ProbeError | Readiness probe error: Get "https://10.128.0.90:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | Unhealthy | Readiness probe failed: Get "https://10.128.0.90:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Killing | Stopping container installer |
| (x3) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Unhealthy | Readiness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Unhealthy | Liveness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | ProbeError | Liveness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" in 963ms (963ms including waiting). Image size: 407582743 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" in 1.754s (1.754s including waiting). Image size: 407582743 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.93/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7" already present on machine |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-4p4zh |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | Started | Started container kube-rbac-proxy-metrics |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ad82327a0c3eac3d7a73ca67630eaf63bafc37514ea75cb6e8b51e995458b01" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78f6aebe76fa9da71b631ceced1ed159d8b60a6fa8e0325fd098c7b029039e89" in 14.076s (14.076s including waiting). Image size: 600181603 bytes. |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-authentication | multus | oauth-openshift-747bdb58b5-mn76f | AddedInterface | Add eth0 [10.128.0.94/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d00e4a8d28"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ - "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ - "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f779b92bb"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Liveness probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Created | Created container: machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | Created | Created container: oauth-openshift |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Started | Started container machine-config-daemon |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | Started | Started container oauth-openshift |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a) |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | Unhealthy | Readiness probe failed: Get "https://10.128.0.94:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | ProbeError | Readiness probe error: Get "https://10.128.0.94:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Started | Started container node-ca |
| (x8) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.") |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" : configmap "prometheus-k8s-rulefiles-0" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9." |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ad82327a0c3eac3d7a73ca67630eaf63bafc37514ea75cb6e8b51e995458b01" in 32.478s (32.478s including waiting). Image size: 476114217 bytes. |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Created | Created container: node-ca |
| (x8) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | BootResync | Booting node master-0, currentConfig rendered-master-459a0309a4bacb184a38028403c86289, desiredConfig rendered-master-2072f444cb169be2ed482bc255f04f4f |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Drain | Drain not required, skipping |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | AddSigtermProtection | Adding SIGTERM protection |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Working |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| (x2) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| (x2) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager |
| (x2) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_eb43d59a-f43f-4662-bfbd-917947e0afa2 became leader | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-5c8b4c9687 to 0 from 1 | |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-667484ff5-n7qz8_a518a623-14e9-413a-a2d4-57aa5bff9888 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-84f75d5446 to 0 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84f75d5446-j8tkx | Killing | Stopping container route-controller-manager |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-84f75d5446 | SuccessfulDelete | Deleted pod: route-controller-manager-84f75d5446-j8tkx |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-78d987764b to 1 from 0 |
| | openshift-controller-manager | replicaset-controller | controller-manager-5c8b4c9687 | SuccessfulDelete | Deleted pod: controller-manager-5c8b4c9687-4pxw5 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager | kubelet | controller-manager-5c8b4c9687-4pxw5 | Killing | Stopping container controller-manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-b5dddf8f5-kwb74_5ca370b2-b833-4ac2-a2ec-714c2181193e became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| (x9) | openshift-console | kubelet | console-c5d7cd7f9-2hp75 | ProbeError | Startup probe error: Get "https://10.128.0.86:8443/health": dial tcp 10.128.0.86:8443: connect: connection refused body: |
| (x9) | openshift-console | kubelet | console-c5d7cd7f9-2hp75 | Unhealthy | Startup probe failed: Get "https://10.128.0.86:8443/health": dial tcp 10.128.0.86:8443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-678c7f799b to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.116.158:443/healthz\": dial tcp 172.30.116.158:443: connect: connection refused" to "All is well" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-678c7f799b | SuccessfulCreate | Created pod: route-controller-manager-678c7f799b-4b7nv |
| | openshift-cloud-controller-manager-operator | master-0_ba1e8533-8144-4e40-9332-d3072971f748 | cluster-cloud-config-sync-leader | LeaderElection | master-0_ba1e8533-8144-4e40-9332-d3072971f748 became leader |
| | openshift-controller-manager | replicaset-controller | controller-manager-78d987764b | SuccessfulCreate | Created pod: controller-manager-78d987764b-xcs5w |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d00e4a8d28"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ + "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ + "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f779b92bb"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed,required configmap/serviceaccount-ca has changed" |
| | openshift-cloud-controller-manager-operator | master-0_9301ef81-b09e-42d2-99cf-ba311ba9b7b1 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_9301ef81-b09e-42d2-99cf-ba311ba9b7b1 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:13.378273 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:14.145282 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:14.145369 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:14.145384 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:14.159037 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:44.159165 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:58.163300 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: icy-controller-config", (string) (len=29) "controller-manager-kubeconfig", (string) (len=38) "kube-controller-cert-syncer-kubeconfig", (string) (len=17) "serviceaccount-ca", (string) (len=10) "service-ca", (string) (len=15) "recycler-config" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "cloud-config" }, CertSecretNames: ([]string) (len=2 cap=2) { (string) (len=39) "kube-controller-manager-client-cert-key", (string) (len=10) "csr-signer" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1203 13:56:13.378273 1 cmd.go:413] Getting controller reference for node master-0 I1203 13:56:14.145282 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1203 13:56:14.145369 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1203 13:56:14.145384 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1203 13:56:14.159037 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1203 13:56:44.159165 1 cmd.go:524] Getting installer pods for node master-0 F1203 13:56:58.163300 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| (x10) | openshift-console | kubelet | console-648d88c756-vswh8 | ProbeError | Startup probe error: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused body: |
| | openshift-controller-manager | multus | controller-manager-78d987764b-xcs5w | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| (x10) | openshift-console | kubelet | console-648d88c756-vswh8 | Unhealthy | Startup probe failed: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
OSUpgradeSkipped |
OS upgrade skipped; new MachineConfig (rendered-master-2072f444cb169be2ed482bc255f04f4f) has same OS image (quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411b8fa606f0f605401f0a4477f7f5a3e640d42bd145fdc09b8a78272f8e6baf) as old MachineConfig (rendered-master-459a0309a4bacb184a38028403c86289) | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
OSUpdateStarted |
Changing kernel arguments | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed,required configmap/serviceaccount-ca has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub | |
openshift-route-controller-manager |
multus |
route-controller-manager-678c7f799b-4b7nv |
AddedInterface |
Add eth0 [10.128.0.96/23] from ovn-kubernetes | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
| (x8) | default |
kubelet |
master-0 |
NodeHasNoDiskPressure |
Node master-0 status is now: NodeHasNoDiskPressure |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine | |
openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine | |
| (x7) | default |
kubelet |
master-0 |
NodeHasSufficientPID |
Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| (x8) | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 403 body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 403 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 403 body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [-]poststarthook/apiservice-discovery-controller failed: reason withheld [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_66c7dd28-e6d5-49a3-a632-5c9a8236e53a became leader |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | default | kubelet | master-0 | Rebooted | Node master-0 has been rebooted, boot id: 764a923e-eafb-47f4-8635-9cb972b9b445 |
| | default | kubelet | master-0 | NodeNotReady | Node master-0 status is now: NodeNotReady |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : object "openshift-monitoring"/"prometheus-k8s-tls" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| (x2) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| | openshift-dns | kubelet | node-resolver-4xlhs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ad82327a0c3eac3d7a73ca67630eaf63bafc37514ea75cb6e8b51e995458b01" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Created | Created container: node-exporter |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-login" : object "openshift-authentication"/"v4-0-config-user-template-login" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Created | Created container: machine-config-server |
| (x2) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-apiserver"/"trusted-ca-bundle" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Started | Started container machine-config-server |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-error" : object "openshift-authentication"/"v4-0-config-user-template-error" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "audit-policies" : object "openshift-authentication"/"audit" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-session" : object "openshift-authentication"/"v4-0-config-system-session" not registered |
| (x2) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : object "openshift-apiserver"/"encryption-config-1" not registered |
| (x2) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-apiserver"/"config" not registered |
| (x2) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x2) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
openshift-cluster-version |
kubelet |
cluster-version-operator-7c49fbfc6f-7krqx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" already present on machine | |
| (x2) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered | |
| (x2) | openshift-authentication |
kubelet |
oauth-openshift-747bdb58b5-mn76f |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-etcd-operator"/"etcd-operator-config" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-oauth-apiserver"/"serving-cert" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : object "openshift-oauth-apiserver"/"etcd-client" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "encryption-config" : object "openshift-oauth-apiserver"/"encryption-config-1" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "audit-policies" : object "openshift-oauth-apiserver"/"audit-1" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered |
| (x3) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
FailedMount |
MountVolume.SetUp failed for volume "service-ca" : object "openshift-console"/"service-ca" not registered |
| (x3) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
FailedMount |
MountVolume.SetUp failed for volume "console-oauth-config" : object "openshift-console"/"console-oauth-config" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-f9f7f4946-48mrg |
Started |
Started container kube-rbac-proxy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-f9f7f4946-48mrg |
Created |
Created container: kube-rbac-proxy | |
| (x2) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-cbzpz" : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver"/"serving-cert" not registered |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : object "openshift-apiserver"/"etcd-client" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-f9f7f4946-48mrg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Created |
Created container: cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Started |
Started container cni-plugins | |
| (x3) | openshift-monitoring |
kubelet |
thanos-querier-cc996c4bd-j4hzr |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "audit" : object "openshift-apiserver"/"audit-1" not registered |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-apiserver"/"etcd-serving-ca" not registered |
| (x3) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
FailedMount |
MountVolume.SetUp failed for volume "oauth-serving-cert" : object "openshift-console"/"oauth-serving-cert" not registered |
| (x3) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-console"/"trusted-ca-bundle" not registered |
| (x3) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
FailedMount |
MountVolume.SetUp failed for volume "console-serving-cert" : object "openshift-console"/"console-serving-cert" not registered |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "image-import-ca" : object "openshift-apiserver"/"image-import-ca" not registered |
| (x3) | openshift-monitoring |
kubelet |
thanos-querier-cc996c4bd-j4hzr |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered |
openshift-network-operator |
kubelet |
iptables-alerter-n24qb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" already present on machine | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-cb84b9cdf-qn94w |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-multus |
kubelet |
multus-kk4tm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine | |
| (x2) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered |
openshift-image-registry |
kubelet |
node-ca-4p4zh |
Started |
Started container node-ca | |
openshift-image-registry |
kubelet |
node-ca-4p4zh |
Created |
Created container: node-ca | |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "config-volume" : object "openshift-monitoring"/"alertmanager-main-generated" not registered |
openshift-monitoring |
kubelet |
node-exporter-b62gf |
Created |
Created container: kube-rbac-proxy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Created |
Created container: kubecfg-setup | |
openshift-cluster-version |
kubelet |
cluster-version-operator-7c49fbfc6f-7krqx |
Started |
Started container cluster-version-operator | |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered |
openshift-dns |
kubelet |
node-resolver-4xlhs |
Started |
Started container dns-node-resolver | |
openshift-dns |
kubelet |
node-resolver-4xlhs |
Created |
Created container: dns-node-resolver | |
openshift-cluster-version |
kubelet |
cluster-version-operator-7c49fbfc6f-7krqx |
Created |
Created container: cluster-version-operator | |
openshift-monitoring |
kubelet |
node-exporter-b62gf |
Started |
Started container kube-rbac-proxy | |
| (x2) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : object "openshift-monitoring"/"alertmanager-main-tls" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| (x3) | openshift-console |
kubelet |
console-c5d7cd7f9-2hp75 |
FailedMount |
MountVolume.SetUp failed for volume "console-serving-cert" : object "openshift-console"/"console-serving-cert" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
openshift-multus |
kubelet |
multus-kk4tm |
Started |
Started container kube-multus | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-cb84b9cdf-qn94w |
Started |
Started container kube-rbac-proxy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Started |
Started container kubecfg-setup | |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-cgq6z" : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-8wh8g" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-monitoring |
kubelet |
thanos-querier-cc996c4bd-j4hzr |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-console |
kubelet |
console-c5d7cd7f9-2hp75 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-multus |
kubelet |
multus-kk4tm |
Created |
Created container: kube-multus | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-cb84b9cdf-qn94w |
Created |
Created container: kube-rbac-proxy | |
| (x3) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq |
Created |
Created container: kube-rbac-proxy | |
| (x4) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
FailedMount |
MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-7479ffdf48-hpdzl |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-authentication-operator"/"serving-cert" not registered |
| (x4) | openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Started |
Started container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Created |
Created container: kube-rbac-proxy-ovn-metrics | |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
packageserver-7c64dd9d8b-49skr |
FailedMount |
MountVolume.SetUp failed for volume "webhook-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
| (x4) | openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"kube-rbac-proxy" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
| (x4) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5b557b5f57-s5s96 |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Started |
Started container ovn-acl-logging | |
| (x4) | openshift-service-ca |
kubelet |
service-ca-6b8bb995f7-b68p8 |
FailedMount |
MountVolume.SetUp failed for volume "signing-cabundle" : object "openshift-service-ca"/"signing-cabundle" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Created |
Created container: ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Started |
Started container ovn-controller | |
| (x4) | openshift-config-operator |
kubelet |
openshift-config-operator-68c95b6cf5-fmdmz |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-config-operator"/"config-operator-serving-cert" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Created |
Created container: ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-76bd5d69c7-fjrrg |
FailedMount |
MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-cc996c4bd-j4hzr |
FailedMount |
MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-cc996c4bd-j4hzr |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-cc996c4bd-j4hzr |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-tls" : object "openshift-monitoring"/"thanos-querier-tls" not registered |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : object "openshift-etcd-operator"/"etcd-client" not registered |
| (x3) | openshift-authentication-operator |
kubelet |
authentication-operator-7479ffdf48-hpdzl |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
FailedMount |
MountVolume.SetUp failed for volume "etcd-ca" : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered |
openshift-ovn-kubernetes |
ovnk-controlplane |
ovn-kubernetes-master |
LeaderElection |
ovnkube-control-plane-f9f7f4946-48mrg became leader | |
| (x4) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
FailedMount |
MountVolume.SetUp failed for volume "console-config" : object "openshift-console"/"console-config" not registered |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-76bd5d69c7-fjrrg |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-78d987764b-xcs5w |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager"/"config" not registered |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-78d987764b-xcs5w |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : object "openshift-controller-manager"/"client-ca" not registered |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-78d987764b-xcs5w |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-lxlb8" : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
FailedMount |
MountVolume.SetUp failed for volume "cco-trusted-ca" : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq |
Started |
Started container kube-rbac-proxy | |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
FailedMount |
MountVolume.SetUp failed for volume "etcd-service-ca" : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered |
| (x4) | openshift-monitoring |
kubelet |
openshift-state-metrics-57cbc648f8-q4cgg |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-7479ffdf48-hpdzl |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered |
| (x3) | openshift-console |
kubelet |
console-648d88c756-vswh8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nddv9" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-machine-config-operator |
kubelet |
machine-config-controller-74cddd4fb5-phk6r |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-555496955b-vpcbs |
FailedMount |
MountVolume.SetUp failed for volume "client-ca-bundle" : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered |
| (x4) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-69cc794c58-mfjk2 |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered |
| (x4) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-555496955b-vpcbs |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x4) | openshift-monitoring |
kubelet |
openshift-state-metrics-57cbc648f8-q4cgg |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-cc996c4bd-j4hzr |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-555496955b-vpcbs |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-server-tls" : object "openshift-monitoring"/"metrics-server-tls" not registered |
| (x4) | openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-service-ca-operator"/"serving-cert" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-555496955b-vpcbs |
FailedMount |
MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Started |
Started container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Created |
Created container: bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-42hmk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926" already present on machine | |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-m789m" : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7f88444875-6dk29 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered |
| (x4) | openshift-dns |
kubelet |
dns-default-5m4f8 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns"/"dns-default-metrics-tls" not registered |
| (x4) | openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered |
| (x4) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-b5dddf8f5-kwb74 |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered |
| (x3) | openshift-console |
kubelet |
console-c5d7cd7f9-2hp75 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-gfzrw" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-console |
kubelet |
console-c5d7cd7f9-2hp75 |
FailedMount |
MountVolume.SetUp failed for volume "service-ca" : object "openshift-console"/"service-ca" not registered |
| (x4) | openshift-console |
kubelet |
console-c5d7cd7f9-2hp75 |
FailedMount |
MountVolume.SetUp failed for volume "console-config" : object "openshift-console"/"console-config" not registered |
| (x4) | openshift-console |
kubelet |
console-c5d7cd7f9-2hp75 |
FailedMount |
MountVolume.SetUp failed for volume "console-oauth-config" : object "openshift-console"/"console-oauth-config" not registered |
| | openshift-console | kubelet | console-c5d7cd7f9-2hp75 | FailedMount | (x4) MountVolume.SetUp failed for volume "oauth-serving-cert" : object "openshift-console"/"oauth-serving-cert" not registered |
| | openshift-route-controller-manager | kubelet | route-controller-manager-678c7f799b-4b7nv | FailedMount | (x4) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-route-controller-manager"/"serving-cert" not registered |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | FailedMount | (x4) MountVolume.SetUp failed for volume "catalogserver-certs" : object "openshift-catalogd"/"catalogserver-cert" not registered |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | FailedMount | (x4) MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-czfkv" : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-fn7fm" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-78d987764b-xcs5w | NetworkNotReady | (x4) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | NetworkNotReady | (x4) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container routeoverride-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container northd |
| | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | NetworkNotReady | (x4) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container nbdb |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | FailedMount | (x5) MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-kube-storage-version-migrator-operator"/"config" not registered |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | FailedMount | (x5) MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | FailedMount | (x5) MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-authentication-operator"/"authentication-operator-config" not registered |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | FailedMount | (x5) MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_184f6c52-cd31-4d72-af3a-5af8ad8f6277 became leader |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | FailedMount | (x5) MountVolume.SetUp failed for volume "prometheus-operator-tls" : object "openshift-monitoring"/"prometheus-operator-tls" not registered |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | FailedMount | (x5) MountVolume.SetUp failed for volume "machine-api-operator-tls" : object "openshift-machine-api"/"machine-api-operator-tls" not registered |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | FailedMount | (x4) MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-state-metrics-tls" : object "openshift-monitoring"/"kube-state-metrics-tls" not registered |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-console-operator"/"serving-cert" not registered |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | (x5) MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-console-operator"/"trusted-ca" not registered |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | FailedMount | (x5) MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-console-operator"/"console-operator-config" not registered |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | FailedMount | (x5) MountVolume.SetUp failed for volume "signing-key" : object "openshift-service-ca"/"signing-key" not registered |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-insights"/"openshift-insights-serving-cert" not registered |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | FailedMount | (x5) MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-insights"/"trusted-ca-bundle" not registered |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | FailedMount | (x5) MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-insights"/"service-ca-bundle" not registered |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | FailedMount | (x5) MountVolume.SetUp failed for volume "image-registry-operator-tls" : object "openshift-image-registry"/"image-registry-operator-tls" not registered |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | BackOff | (x2) Back-off restarting failed container machine-approver-controller in pod machine-approver-cb84b9cdf-qn94w_openshift-cluster-machine-approver(a9b62b2f-1e7a-4f1b-a988-4355d93dda46) |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | FailedMount | (x5) MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | FailedMount | (x4) MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | FailedMount | (x5) MountVolume.SetUp failed for volume "tls-certificates" : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | FailedMount | (x5) MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-image-registry"/"trusted-ca" not registered |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | FailedMount | (x5) MountVolume.SetUp failed for volume "samples-operator-tls" : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | FailedMount | (x5) MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns-operator"/"metrics-tls" not registered |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | FailedMount | (x5) MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | FailedMount | (x5) MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-ingress-operator"/"metrics-tls" not registered |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Created | Created container: iptables-alerter |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | FailedMount | (x5) MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-678c7f799b-4b7nv | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-route-controller-manager"/"config" not registered |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | FailedMount | (x5) MountVolume.SetUp failed for volume "monitoring-plugin-cert" : object "openshift-monitoring"/"monitoring-plugin-cert" not registered |
| | openshift-route-controller-manager | kubelet | route-controller-manager-678c7f799b-4b7nv | FailedMount | (x5) MountVolume.SetUp failed for volume "client-ca" : object "openshift-route-controller-manager"/"client-ca" not registered |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | FailedMount | (x5) MountVolume.SetUp failed for volume "webhook-certs" : object "openshift-multus"/"multus-admission-controller-secret" not registered |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | FailedMount | (x5) MountVolume.SetUp failed for volume "marketplace-operator-metrics" : object "openshift-marketplace"/"marketplace-operator-metrics" not registered |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | FailedMount | (x5) MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | FailedMount | (x5) MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-ingress-operator"/"trusted-ca" not registered |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | FailedMount | (x5) MountVolume.SetUp failed for volume "marketplace-trusted-ca" : object "openshift-marketplace"/"marketplace-trusted-ca" not registered |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Started | Started container iptables-alerter |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered |
| | openshift-dns | kubelet | dns-default-5m4f8 | FailedMount | (x5) MountVolume.SetUp failed for volume "config-volume" : object "openshift-dns"/"dns-default" not registered |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | FailedMount | (x5) MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | FailedMount | (x5) MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | FailedMount | (x5) MountVolume.SetUp failed for volume "node-tuning-operator-tls" : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | FailedMount | (x5) MountVolume.SetUp failed for volume "auth-proxy-config" : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered |
| | openshift-controller-manager | kubelet | controller-manager-78d987764b-xcs5w | FailedMount | (x5) MountVolume.SetUp failed for volume "proxy-ca-bundles" : object "openshift-controller-manager"/"openshift-global-ca" not registered |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | FailedMount | (x5) MountVolume.SetUp failed for volume "config" : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered |
| | openshift-controller-manager | kubelet | controller-manager-78d987764b-xcs5w | FailedMount | (x5) MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager"/"serving-cert" not registered |
| | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | FailedMount | (x5) MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | FailedMount | (x5) MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | FailedMount | (x5) MountVolume.SetUp failed for volume "cert" : object "openshift-ingress-canary"/"canary-serving-cert" not registered |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | (x5) MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-authentication-operator"/"service-ca-bundle" not registered |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | FailedMount | (x5) MountVolume.SetUp failed for volume "telemetry-config" : object "openshift-monitoring"/"telemetry-config" not registered |
| | openshift-cluster-node-tuning-operator | node-controller | tuned-7zkbg | NodeNotReady | Node is not ready |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: whereabouts-cni |
| | kube-system | node-controller | bootstrap-kube-controller-manager-master-0 | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | machine-config-server-pvrfs | NodeNotReady | Node is not ready |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-28n2f" : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-rb6pb" : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-dns | node-controller | node-resolver-4xlhs | NodeNotReady | Node is not ready |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-92p99" : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-machine-config-operator | node-controller | machine-config-daemon-2ztl9 | NodeNotReady | Node is not ready |
| | openshift-multus | node-controller | multus-kk4tm | NodeNotReady | Node is not ready |
| | openshift-cloud-controller-manager-operator | node-controller | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | NodeNotReady | Node is not ready |
| | openshift-network-operator | node-controller | network-operator-6cbf58c977-8lh6n | NodeNotReady | Node is not ready |
| | openshift-monitoring | node-controller | node-exporter-b62gf | NodeNotReady | Node is not ready |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-wqkdr" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container whereabouts-cni |
| | openshift-cluster-version | node-controller | cluster-version-operator-7c49fbfc6f-7krqx | NodeNotReady | Node is not ready |
| | openshift-kube-apiserver | node-controller | kube-apiserver-master-0 | NodeNotReady | Node is not ready |
| | openshift-ovn-kubernetes | node-controller | ovnkube-control-plane-f9f7f4946-48mrg | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | kube-rbac-proxy-crio-master-0 | NodeNotReady | Node is not ready |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-kube-apiserver | node-controller | kube-apiserver-startup-monitor-master-0 | NodeNotReady | Node is not ready |
| | openshift-image-registry | node-controller | node-ca-4p4zh | NodeNotReady | Node is not ready |
| | openshift-network-node-identity | node-controller | network-node-identity-c8csx | NodeNotReady | Node is not ready |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-rjbsl" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| | openshift-network-operator | node-controller | iptables-alerter-n24qb | NodeNotReady | Node is not ready |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-tfs27" : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-nrngd" : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-p5mrw" : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-route-controller-manager | kubelet | route-controller-manager-678c7f799b-4b7nv | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-lq4dz" : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container ovnkube-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: ovnkube-controller |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | (x9) (combined from similar events): MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | NetworkNotReady | (x5) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | (x6) (combined from similar events): MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | NetworkNotReady | (x7) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-route-controller-manager | kubelet | route-controller-manager-678c7f799b-4b7nv | NetworkNotReady | (x7) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | FailedMount | (x6) MountVolume.SetUp failed for volume "images" : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-2fns8" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-marketplace | kubelet | community-operators-7fwtv | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-zcqxx" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-c5nch" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-nc9nj" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-v429m" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-p7ss6" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-pj4f8" : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-t8knq" : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-9cnd5" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-jzlgx" : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-zhc87" : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-p6dpf" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-nxt87" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | FailedMount | (x6) MountVolume.SetUp failed for volume "kube-api-access-ltsnd" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | FailedMount | (x4) MountVolume.SetUp failed for volume "kube-api-access-wwv7s" : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-jkbcq" : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-x22gr" : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | NetworkNotReady | (x6) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-ncwtx" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-lfdn2" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-bwck4" : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-fw8h8" : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | FailedMount | (x5) MountVolume.SetUp failed for volume "kube-api-access-7q659" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-marketplace |
kubelet |
redhat-operators-6z4sc |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-mhf9r" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-5mk6r" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-djxkd" : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-jn5h6" : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] |
| (x11) | openshift-ingress | kubelet | router-default-54f97f57-rr9px | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x14) | openshift-ingress | kubelet | router-default-54f97f57-rr9px | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| (x10) | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-network-diagnostics | kubelet | network-check-target-pcchm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-marketplace | kubelet | certified-operators-t8rt7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-dns | kubelet | dns-default-5m4f8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x11) | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-multus | kubelet | network-metrics-daemon-ch7xd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-marketplace | kubelet | community-operators-7fwtv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x11) | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-marketplace | kubelet | redhat-operators-6z4sc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x10) | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | default | kubelet | master-0 | NodeReady | Node master-0 status is now: NodeReady |
| | openshift-machine-config-operator | multus | machine-config-controller-74cddd4fb5-phk6r | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Started | Started container machine-config-controller |
| | openshift-dns | kubelet | dns-default-5m4f8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6" already present on machine |
| | openshift-multus | multus | multus-admission-controller-5bdcc987c4-x99xc | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf" already present on machine |
| | openshift-image-registry | multus | cluster-image-registry-operator-65dc4bcb88-96zcz | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30948d73ae763e995468b7e0767b855425ccbbbef13667a2fd3ba06b3c40a165" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-dns | multus | dns-default-5m4f8 | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-machine-config-operator | multus | machine-config-operator-664c9d94c9-9vfr4 | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d" already present on machine |
| | openshift-multus | multus | network-metrics-daemon-ch7xd | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | multus | openshift-state-metrics-57cbc648f8-q4cgg | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-dns | kubelet | dns-default-5m4f8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-dns | kubelet | dns-default-5m4f8 | Started | Started container dns |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Created | Created container: machine-config-controller |
| | openshift-dns | kubelet | dns-default-5m4f8 | Created | Created container: dns |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-monitoring | multus | monitoring-plugin-547cc9cc49-kqs4k | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-dns | kubelet | dns-default-5m4f8 | Created | Created container: kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-5m4f8 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | multus | prometheus-operator-565bdcb8-477pk | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: kube-rbac-proxy-self |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Created | Created container: network-metrics-daemon |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e39fd49a8aa33e4b750267b4e773492b85c08cc7830cd7b22e64a92bcb5b6729" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Created | Created container: multus-admission-controller |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | Created | Created container: cluster-image-registry-operator |
| | openshift-monitoring | multus | kube-state-metrics-7dcc7f9bd6-68wml | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Started | Started container monitoring-plugin |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Created | Created container: monitoring-plugin |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container kube-rbac-proxy-main |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-monitoring | multus | cluster-monitoring-operator-69cc794c58-mfjk2 | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: kube-rbac-proxy-main |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Created | Created container: machine-config-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | Started | Started container cluster-image-registry-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Started | Started container machine-config-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Created | Created container: prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:903557bdbb44cf720481cc9b305a8060f327435d303c95e710b92669ff43d055" already present on machine |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-monitoring | multus | thanos-querier-cc996c4bd-j4hzr | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f870aa3c7bcd039c7905b2c7a9e9c0776d76ed4cf34ccbef872ae7ad8cf2157f" already present on machine |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-bbd9b9dff-rrfsm | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0737727dcbfb50c3c09b69684ba3c07b5a4ab7652bbe4970a46d6a11c4a2bca" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Created | Created container: cluster-monitoring-operator |
| | openshift-monitoring | multus | metrics-server-555496955b-vpcbs | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | Created | Created container: cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4e0b20fdb38d516e871ff5d593c4273cc9933cb6a65ec93e727ca4a7777fd20" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Started | Started container cluster-monitoring-operator |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-6d64b47964-jjd7h | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-insights | multus | insights-operator-59d99f9b7b-74sss | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-86897dd478-qqwh7 | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-diagnostics | multus | network-check-source-6964bb78b7-g4lv2 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (enabled/disabled gate list identical to the cluster-monitoring-operator FeatureGatesInitialized event above) |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03" already present on machine |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-f84784664-ntb9w | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-authentication | multus | oauth-openshift-747bdb58b5-mn76f | AddedInterface | Add eth0 [10.128.0.94/23] from ovn-kubernetes |
| | openshift-authentication-operator | multus | authentication-operator-7479ffdf48-hpdzl | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-machine-api | multus | machine-api-operator-7486ff55f-wcnxg | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
| | openshift-service-ca-operator | multus | service-ca-operator-56f5898f45-fhnc5 | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-baremetal-operator-5fdc576499-j2n8j | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" already present on machine |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" already present on machine |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-7c4697b5f5-9f69p | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9" already present on machine |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" already present on machine |
| | openshift-kube-storage-version-migrator | multus | migrator-5bcf58cf9c-dvklg | AddedInterface | Add eth0 [10.128.0.27/23] from ovn-kubernetes |
| | openshift-service-ca | multus | service-ca-6b8bb995f7-b68p8 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-marketplace | multus | marketplace-operator-7d67745bb7-dwcxb | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-network-diagnostics | multus | network-check-target-pcchm | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" already present on machine |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-7b795784b8-44frm | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" already present on machine |
| | openshift-ingress-canary | multus | ingress-canary-vkpv4 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | multus | cluster-autoscaler-operator-7f88444875-6dk29 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver-operator | multus | kube-apiserver-operator-5b557b5f57-s5s96 | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-marketplace | multus | certified-operators-t8rt7 | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Started | Started container prometheus-operator |
| | openshift-operator-lifecycle-manager | multus | packageserver-7c64dd9d8b-49skr | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" already present on machine |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-apiserver | multus | apiserver-6985f84b49-v9vlg | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Created | Created container: openshift-controller-manager-operator |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Started | Started container extract-utilities |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-86897dd478-qqwh7 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-86897dd478-qqwh7 became leader |
| | openshift-catalogd | multus | catalogd-controller-manager-754cfd84-qf898 | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| | openshift-controller-manager | multus | controller-manager-78d987764b-xcs5w | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes |
| | openshift-console | multus | console-648d88c756-vswh8 | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Created | Created container: check-endpoints |
| | openshift-operator-controller | multus | operator-controller-controller-manager-5f78c89466-bshxw | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | multus | cluster-olm-operator-589f5cdc9d-5h2kn | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-config-operator | multus | openshift-config-operator-68c95b6cf5-fmdmz | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-etcd-operator | multus | etcd-operator-7978bf889c-n64v4 | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e0e3400f1cb68a205bfb841b6b1a78045e7d80703830aa64979d46418d19c835" already present on machine |
| | openshift-route-controller-manager | multus | route-controller-manager-678c7f799b-4b7nv | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-6z4sc | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Started | Started container check-endpoints |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-ddwmn | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | Started | Started container cluster-storage-operator |
| | openshift-operator-lifecycle-manager | multus | olm-operator-76bd5d69c7-fjrrg | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | Created | Created container: cluster-storage-operator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" already present on machine |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" already present on machine |
| | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | Created | Created container: service-ca-operator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Started | Started container migrator |
| | openshift-dns-operator | multus | dns-operator-6b7bcd6566-jh9m8 | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | multus | apiserver-57fd58bc7b-kktql | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Created | Created container: csi-snapshot-controller-operator |
| | openshift-marketplace | multus | community-operators-7fwtv | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Created | Created container: migrator |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-7c4dc67499-tjwg8 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | Created | Created container: serve-healthcheck-canary |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-7cf5cf757f-zgm6l | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-machine-api | multus | control-plane-machine-set-operator-66f4cc99d4-x278n | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-console-operator | multus | console-operator-77df56447c-vsrxx | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-667484ff5-n7qz8 | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-ingress-operator | multus | ingress-operator-85dbd94574-8jfp5 | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | Started | Started container csi-snapshot-controller-operator |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-75b4d49d4c-h599p | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes |
| | openshift-console | multus | console-c5d7cd7f9-2hp75 | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-console | multus | downloads-6f5db8559b-96ljh | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-b5dddf8f5-kwb74 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | Created | Created container: network-check-target-container |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d886210d2faa9ace5750adfc70c0c3c5512cdf492f19d1c536a446db659aabb" already present on machine |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | Started | Started container service-ca-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Created | Created container: cluster-samples-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | Created | Created container: kube-apiserver-operator |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | Started | Started container network-check-target-container |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | Started | Started container serve-healthcheck-canary |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" already present on machine |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Started | Started container cluster-samples-operator |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Created | Created container: kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Started | Started container catalog-operator |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container extract-utilities |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Created | Created container: catalog-operator |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: extract-utilities |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (enabled/disabled gate list identical to the cluster-monitoring-operator FeatureGatesInitialized event above) |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Started | Started container graceful-termination |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Started | Started container kube-rbac-proxy |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Created | Created container: graceful-termination |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | Created | Created container: kube-storage-version-migrator-operator |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Created | Created container: kube-controller-manager-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Created | Created container: openshift-api |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Created | Created container: cluster-samples-operator-watch |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Created | Created container: copy-catalogd-manifests |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: extract-utilities |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Created | Created container: dns-operator |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Created | Created container: download-server |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Created | Created container: kube-rbac-proxy |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: extract-utilities |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Created | Created container: openshift-config-operator |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Created | Created container: package-server-manager |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Created | Created container: copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" already present on machine |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Started | Started container copy-catalogd-manifests |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (enabled/disabled gate list identical to the cluster-monitoring-operator FeatureGatesInitialized event above) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Started | Started container download-server |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 2.412s (2.412s including waiting). Image size: 1204969293 bytes. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container extract-utilities |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Started | Started container cluster-samples-operator-watch |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-678c7f799b-4b7nv_c841bc60-7211-4fd2-8f02-c1b7a7cfe287 became leader |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Started | Started container copy-operator-controller-manifests |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-78d987764b-xcs5w became leader |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Started | Started container manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Started | Started container openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251" already present on machine |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Started | Started container dns-operator |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Started | Started container package-server-manager |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Created | Created container: extract-content |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Created | Created container: kube-rbac-proxy |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Unhealthy | Liveness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Started | Started container extract-content |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | ProbeError | Liveness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| (x2) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | ProbeError | Readiness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.74s (1.74s including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.811s (1.811s including waiting). Image size: 1201319250 bytes. |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Started | Started container kube-rbac-proxy |
| (x2) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Unhealthy | Readiness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 2.454s (2.454s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 2.358s (2.358s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 2.399s (2.399s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 26.746s (26.746s including waiting). Image size: 1609963837 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 403ms (403ms including waiting). Image size: 912736453 bytes. |
| | openshift-network-node-identity | master-0_5afed25e-12f6-4185-ae67-dfd1fe63e5fc | ovnkube-identity | LeaderElection | master-0_5afed25e-12f6-4185-ae67-dfd1fe63e5fc became leader |
| | openshift-console | replicaset-controller | console-c5d7cd7f9 | SuccessfulDelete | Deleted pod: console-c5d7cd7f9-2hp75 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-c5d7cd7f9 to 0 from 1 |
| | openshift-cloud-controller-manager-operator | master-0_e26e83aa-a388-47cc-9a20-7941572ab699 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_e26e83aa-a388-47cc-9a20-7941572ab699 became leader |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_e823bbff-d55f-42da-a34a-3a7ee1f32728 became leader |
| | openshift-marketplace | default-scheduler | community-operators-qb87p | Scheduled | Successfully assigned openshift-marketplace/community-operators-qb87p to master-0 |
| | openshift-marketplace | default-scheduler | redhat-marketplace-vjtzs | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-vjtzs to master-0 |
| | openshift-marketplace | default-scheduler | certified-operators-2ts27 | Scheduled | Successfully assigned openshift-marketplace/certified-operators-2ts27 to master-0 |
| | openshift-marketplace | default-scheduler | redhat-operators-5zhkp | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-5zhkp to master-0 |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | multus | redhat-marketplace-vjtzs | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | multus | community-operators-qb87p | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-qb87p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | community-operators-qb87p | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-qb87p | Started | Started container extract-utilities |
| | openshift-marketplace | multus | redhat-operators-5zhkp | AddedInterface | Add eth0 [10.128.0.29/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | certified-operators-2ts27 | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-qb87p | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 872ms (872ms including waiting). Image size: 1609963837 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 847ms (847ms including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 629ms (629ms including waiting). Image size: 1204969293 bytes. |
| | openshift-marketplace | kubelet | community-operators-qb87p | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 826ms (826ms including waiting). Image size: 1201319250 bytes. |
| | openshift-marketplace | kubelet | community-operators-qb87p | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-qb87p | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-qb87p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-qb87p | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 621ms (621ms including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 412ms (412ms including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 469ms (470ms including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-qb87p | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-qb87p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 415ms (415ms including waiting). Image size: 912736453 bytes. |
| | openshift-cloud-controller-manager-operator | master-0_8c50d308-ea0b-426b-8188-82b3c3a6e4b8 | cluster-cloud-config-sync-leader | LeaderElection | master-0_8c50d308-ea0b-426b-8188-82b3c3a6e4b8 became leader |
| | openshift-marketplace | kubelet | certified-operators-2ts27 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | community-operators-qb87p | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-5zhkp | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-vjtzs | Killing | Stopping container registry-server |
| | openshift-cluster-machine-approver | master-0_1513e030-68a8-4e3a-8b44-362970fb4908 | cluster-machine-approver-leader | LeaderElection | master-0_1513e030-68a8-4e3a-8b44-362970fb4908 became leader |
| | openshift-machine-api | control-plane-machine-set-operator-66f4cc99d4-x278n_a21b99a4-8c0b-4b61-a87e-5e91ccf576a0 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-66f4cc99d4-x278n_a21b99a4-8c0b-4b61-a87e-5e91ccf576a0 became leader |
| | openshift-machine-api | cluster-baremetal-operator-5fdc576499-j2n8j_72508d57-61bc-4c15-985d-2738d9df77ea | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-5fdc576499-j2n8j_72508d57-61bc-4c15-985d-2738d9df77ea became leader |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-catalogd | catalogd-controller-manager-754cfd84-qf898_4b1161a7-f2a9-4e91-803b-ebd9485116b2 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-754cfd84-qf898_4b1161a7-f2a9-4e91-803b-ebd9485116b2 became leader |
| | openshift-operator-controller | operator-controller-controller-manager-5f78c89466-bshxw_541d6d2c-e645-42b5-aba8-9835499a3484 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-5f78c89466-bshxw_541d6d2c-e645-42b5-aba8-9835499a3484 became leader |
| (x9) | openshift-kube-apiserver | kubelet | installer-4-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_82443010-2039-484f-96a0-cfcf6bd509d4 became leader |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-58nng |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_d20ab31f-84b8-44ec-a4e8-c4cb2919960a became leader |
| | openshift-multus | default-scheduler | cni-sysctl-allowlist-ds-58nng | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-58nng to master-0 |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-58nng | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-58nng | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-58nng | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-58nng | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-56c9b9fa8d9gs -n openshift-monitoring because it was missing |
openshift-monitoring |
default-scheduler |
telemeter-client-764cbf5554-kftwv |
Scheduled |
Successfully assigned openshift-monitoring/telemeter-client-764cbf5554-kftwv to master-0 | |
openshift-monitoring |
replicaset-controller |
telemeter-client-764cbf5554 |
SuccessfulCreate |
Created pod: telemeter-client-764cbf5554-kftwv | |
openshift-monitoring |
deployment-controller |
telemeter-client |
ScalingReplicaSet |
Scaled up replica set telemeter-client-764cbf5554 to 1 | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-84c998f64f to 1 | |
openshift-multus |
default-scheduler |
multus-admission-controller-84c998f64f-8stq7 |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-84c998f64f-8stq7 to master-0 | |
openshift-multus |
replicaset-controller |
multus-admission-controller-84c998f64f |
SuccessfulCreate |
Created pod: multus-admission-controller-84c998f64f-8stq7 | |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Created | Created container: multus-admission-controller |
| | openshift-multus | multus | multus-admission-controller-84c998f64f-8stq7 | AddedInterface | Add eth0 [10.128.0.31/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Killing | Stopping container multus-admission-controller |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-5bdcc987c4 to 0 from 1 |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Created | Created container: kube-rbac-proxy |
| | openshift-multus | replicaset-controller | multus-admission-controller-5bdcc987c4 | SuccessfulDelete | Deleted pod: multus-admission-controller-5bdcc987c4-x99xc |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-x99xc | Killing | Stopping container kube-rbac-proxy |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: status.relatedObjects changed from [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}] |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-589f5cdc9d-5h2kn_ceb1d90e-1ea4-4679-b0d7-030b784087ea became leader |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-56f5898f45-fhnc5_747be228-c5a2-4f53-82c3-79aaf8ff7e31 became leader |
| | openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-7c696657b7 to 1 |
| | openshift-network-console | replicaset-controller | networking-console-plugin-7c696657b7 | SuccessfulCreate | Created pod: networking-console-plugin-7c696657b7-452tx |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace |
| | openshift-network-console | default-scheduler | networking-console-plugin-7c696657b7-452tx | Scheduled | Successfully assigned openshift-network-console/networking-console-plugin-7c696657b7-452tx to master-0 |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-58nng | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5b557b5f57-s5s96_2c9ad02d-67bb-419a-8e3b-e2bc367169ff became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-74cff6cf84 | SuccessfulCreate | Created pod: route-controller-manager-74cff6cf84-bh8rz |
| | openshift-controller-manager | replicaset-controller | controller-manager-78d987764b | SuccessfulDelete | Deleted pod: controller-manager-78d987764b-xcs5w |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-678c7f799b | SuccessfulDelete | Deleted pod: route-controller-manager-678c7f799b-4b7nv |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7d7ddcf759 to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-78d987764b to 0 from 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-7d7ddcf759 | SuccessfulCreate | Created pod: controller-manager-7d7ddcf759-pvkrm |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-7c4697b5f5-9f69p_740e0925-709f-4eda-b09e-f07c9020432e became leader |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-74cff6cf84 to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-678c7f799b to 0 from 1 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 4, desired generation is 5.",Available changed from False to True ("All is well") |
| | openshift-controller-manager | default-scheduler | controller-manager-7d7ddcf759-pvkrm | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | kubelet | controller-manager-78d987764b-xcs5w | Killing | Stopping container controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-74cff6cf84-bh8rz | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| | openshift-route-controller-manager | kubelet | route-controller-manager-678c7f799b-4b7nv | Killing | Stopping container route-controller-manager |
| | openshift-controller-manager | default-scheduler | controller-manager-7d7ddcf759-pvkrm | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm to master-0 |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7d7ddcf759-pvkrm became leader |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-74cff6cf84-bh8rz | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz to master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 4, desired generation is 5." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no pods available on any node.") |
| | openshift-controller-manager | multus | controller-manager-7d7ddcf759-pvkrm | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | Created | Created container: route-controller-manager |
| | openshift-route-controller-manager | multus | route-controller-manager-74cff6cf84-bh8rz | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-74cff6cf84-bh8rz_208bb430-a78d-4dcc-9667-6bf09de56288 became leader |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-7479ffdf48-hpdzl_08e46fba-98d1-4d8d-9c1c-d96cbbcc183f became leader |
| | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | Started | Started container route-controller-manager |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no pods available on any node." to "Available: no route controller manager deployment pods available on any node." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"94dc1c25-2c73-4734-8e8a-55c14c29fe7c\", ResourceVersion:\"13812\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 3, 13, 44, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 3, 13, 58, 36, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0056be5b8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"94dc1c25-2c73-4734-8e8a-55c14c29fe7c\", ResourceVersion:\"13812\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 3, 13, 44, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 3, 13, 58, 36, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0056be5b8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node." |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-65dc4bcb88-96zcz_0f8207da-e759-4d49-bddc-33ec8b661233 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-b5dddf8f5-kwb74_569ea2de-8801-494b-aebf-67eee3efcc78 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 13:56:13.378273 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 13:56:14.145282 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 13:56:14.145369 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 13:56:14.145384 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 13:56:14.159037 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 13:56:44.159165 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 13:56:58.163300 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-77df56447c-vsrxx_c2af8e12-c4bf-47d4-a0ff-21dd596ac61a became leader |
| | openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "DeploymentAvailable: 0 replicas available for console deployment" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"),status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console | default-scheduler | console-59fc685495-qcxmz | Scheduled | Successfully assigned openshift-console/console-59fc685495-qcxmz to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-console | replicaset-controller | console-59fc685495 | SuccessfulCreate | Created pod: console-59fc685495-qcxmz |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-59fc685495 to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-console | kubelet | console-59fc685495-qcxmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.28, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected",Available changed from False to True ("All is well") |
| | openshift-console | multus | console-59fc685495-qcxmz | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-console | kubelet | console-59fc685495-qcxmz | Created | Created container: console |
| | openshift-console | kubelet | console-59fc685495-qcxmz | Started | Started container console |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-68c95b6cf5-fmdmz_cd873c7e-a586-42e3-867a-8ffede804784 became leader |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-machine-api | cluster-autoscaler-operator-7f88444875-6dk29_90c21f17-5bb6-414c-b953-2f92764064c5 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-7f88444875-6dk29_90c21f17-5bb6-414c-b953-2f92764064c5 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-67c4cff67d-q2lxz_456624fd-40a8-4917-8f5f-9ecced0dbb92 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-667484ff5-n7qz8_cb86f1a8-ab90-4146-877c-8b36a1968c89 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required secret/service-account-private-key has changed" |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| (x7) | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | FailedMount | MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found |
| | openshift-operator-lifecycle-manager | package-server-manager-75b4d49d4c-h599p_394eaa1c-5628-43f7-82be-44e4b330163a | packageserver-controller-lock | LeaderElection | package-server-manager-75b4d49d4c-h599p_394eaa1c-5628-43f7-82be-44e4b330163a became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-console | kubelet | console-648d88c756-vswh8 | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-648d88c756 | SuccessfulDelete | Deleted pod: console-648d88c756-vswh8 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-648d88c756 to 0 from 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.28, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.28, 2 replicas available" |
| | openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-kube-scheduler-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager |
multus |
installer-3-master-0 |
AddedInterface |
Add eth0 [10.128.0.42/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-5f574c6c79-86bh9_3330e229-7f62-4797-befb-997655103060 became leader | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-0 |
Started |
Started container installer | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-0 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine | |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-7978bf889c-n64v4_5ed5b3af-32b8-464c-80cc-a84c09c6cfc2 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b795784b8-44frm_90be48d1-b797-4ccf-a0a5-ab126fbbe211 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "etcd" changed from "" to "4.18.28" |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Started | Started container machine-config-daemon |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Created | Created container: machine-config-daemon |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| (x8) | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : secret "telemeter-client-tls" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 2 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 1 because static pod is ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-6b8bb995f7-b68p8_b5a2cecd-6304-4679-84fd-0c4527e56380 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_8249522d-a202-45e8-b722-b27930359d92 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-etcd | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-etcd | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64" |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-console | multus | networking-console-plugin-7c696657b7-452tx | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25b69045d961dc26719bc4cbb3a854737938b6e97375c04197e9cbc932541b17" |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25b69045d961dc26719bc4cbb3a854737938b6e97375c04197e9cbc932541b17" in 3.237s (3.237s including waiting). Image size: 440967902 bytes. |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | Started | Started container networking-console-plugin |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | Created | Created container: networking-console-plugin |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29412855 | SuccessfulCreate | Created pod: collect-profiles-29412855-jmbvv |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29412855-jmbvv | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv to master-0 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29412855 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412855-jmbvv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29412855-jmbvv | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412855-jmbvv | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412855-jmbvv | Created | Created container: collect-profiles |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bbd9b9dff-rrfsm_0d79b901-5d37-4ea5-9024-71e5bb1f3ae9 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-bbd9b9dff-rrfsm_0d79b901-5d37-4ea5-9024-71e5bb1f3ae9 became leader |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_658a957b-eef6-4175-b580-48a4d0e3aef9 became leader |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
ConfigDriftMonitorStopped |
Config Drift Monitor stopped | |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28b3ba29ff038781d3742df4ab05fac69a92cf2bf058c25487e47a2f4ff02627" |
| | openshift-monitoring | multus | telemeter-client-764cbf5554-kftwv | AddedInterface | Add eth0 [10.128.0.30/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28b3ba29ff038781d3742df4ab05fac69a92cf2bf058c25487e47a2f4ff02627" in 3.772s (3.772s including waiting). Image size: 475010905 bytes. |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_5a7800e0-412d-4a95-a105-30ca469027fc became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29412855, condition: Complete |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29412855 | Completed | Job completed |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Created | Created container: telemeter-client |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"94dc1c25-2c73-4734-8e8a-55c14c29fe7c\", ResourceVersion:\"18825\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 3, 13, 44, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 3, 14, 8, 11, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002d88030), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | Started | Started container kube-rbac-proxy |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | SkipReboot | Config changes do not require reboot. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: (string) (len=15) "recycler-config" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "cloud-config" }, CertSecretNames: ([]string) (len=2 cap=2) { (string) (len=39) "kube-controller-manager-client-cert-key", (string) (len=10) "csr-signer" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1203 14:14:25.545716 1 cmd.go:413] Getting controller reference for node master-0 I1203 14:14:25.556712 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1203 14:14:25.556806 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1203 14:14:25.556822 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1203 14:14:25.560206 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I1203 14:14:35.565212 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I1203 14:14:45.565031 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1203 14:15:15.565644 1 cmd.go:524] Getting installer pods for node master-0 F1203 14:15:15.569386 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-2072f444cb169be2ed482bc255f04f4f |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-2072f444cb169be2ed482bc255f04f4f to Done |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-2072f444cb169be2ed482bc255f04f4f and node has been uncordoned |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 14:14:25.545716 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556712 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556806 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.556822 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.560206 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:35.565212 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:45.565031 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 14:15:15.565644 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 14:15:15.569386 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | installer errors: installer: "", Namespace: (string) (len=14) "openshift-etcd", Clock: (clock.RealClock) { }, PodConfigMapNamePrefix: (string) (len=8) "etcd-pod", SecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=14) "etcd-all-certs" }, OptionalSecretNamePrefixes: ([]string) <nil>, ConfigMapNamePrefixes: ([]string) (len=3 cap=4) { (string) (len=8) "etcd-pod", (string) (len=14) "etcd-endpoints", (string) (len=16) "etcd-all-bundles" }, OptionalConfigMapNamePrefixes: ([]string) <nil>, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=14) "etcd-all-certs" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=3 cap=4) { (string) (len=16) "restore-etcd-pod", (string) (len=12) "etcd-scripts", (string) (len=16) "etcd-all-bundles" }, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=47) "/etc/kubernetes/static-pod-resources/etcd-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1203 14:14:49.547412 1 cmd.go:413] Getting controller reference for node master-0 I1203 14:14:49.561840 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1203 14:14:49.561924 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1203 14:14:49.561938 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1203 14:14:49.565381 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1203 14:15:19.565943 1 cmd.go:524] Getting installer pods for node master-0 F1203 14:15:19.567325 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 4 to 5 because static pod is ready |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine |
| | openshift-monitoring | default-scheduler | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node master-0 to MachineConfig: rendered-master-459a0309a4bacb184a38028403c86289 |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-459a0309a4bacb184a38028403c86289 |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78f6aebe76fa9da71b631ceced1ed159d8b60a6fa8e0325fd098c7b029039e89" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-controller-manager | multus | installer-3-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-retry-1-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | installer-2-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | multus | installer-2-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-2-retry-1-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-2-retry-1-master-0 | Created | Created container: installer |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container alertmanager | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulDelete |
delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulCreate |
create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
default-scheduler |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.60/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: 
connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" | |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6c9c84854 to 1 |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdateFailed | Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-console | replicaset-controller | console-6c9c84854 | SuccessfulCreate | Created pod: console-6c9c84854-xf7nv |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well") |
| | openshift-console | default-scheduler | console-6c9c84854-xf7nv | Scheduled | Successfully assigned openshift-console/console-6c9c84854-xf7nv to master-0 |
| | openshift-console | kubelet | console-6c9c84854-xf7nv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine |
| | openshift-console | multus | console-6c9c84854-xf7nv | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-6c9c84854-xf7nv | Started | Started container console |
| | openshift-console | kubelet | console-6c9c84854-xf7nv | Created | Created container: console |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.28"}] to [{"raw-internal" "4.18.28"} {"kube-controller-manager" "1.31.13"} {"operator" "4.18.28"}] |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 14:14:25.545716 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556712 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556806 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.556822 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.560206 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:35.565212 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:45.565031 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 14:15:15.565644 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 14:15:15.569386 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 14:14:25.545716 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556712 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556806 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.556822 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.560206 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:35.565212 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:45.565031 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 14:15:15.565644 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 14:15:15.569386 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-59fc685495 to 0 from 1 |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| | openshift-console | replicaset-controller | console-59fc685495 | SuccessfulDelete | Deleted pod: console-59fc685495-qcxmz |
| | openshift-kube-controller-manager | static-pod-installer | installer-3-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.28" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.13" |
| | openshift-console | kubelet | console-59fc685495-qcxmz | Killing | Stopping container console |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| (x8) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_aac63a8c-dbd0-41a2-b520-9c6adbd2f0a3 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x8) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 14:14:25.545716 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556712 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556806 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.556822 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.560206 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:35.565212 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:45.565031 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 14:15:15.565644 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 14:15:15.569386 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 14:14:25.545716 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556712 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556806 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.556822 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.560206 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:35.565212 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:45.565031 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 14:15:15.565644 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 14:15:15.569386 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " |
| | openshift-etcd | kubelet | etcd-master-0 | Killing | Stopping container etcdctl |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Started | Started container approver |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Created | Created container: approver |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Created | Created container: cluster-cloud-controller-manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | ProbeError | Readiness probe error: Get "http://10.128.0.33:8081/readyz": dial tcp 10.128.0.33:8081: connect: connection refused body: |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Unhealthy | Readiness probe failed: Get "http://10.128.0.33:8081/readyz": dial tcp 10.128.0.33:8081: connect: connection refused |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601" already present on machine |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Created | Created container: marketplace-operator |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" already present on machine |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Created | Created container: manager |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Created | Created container: config-sync-controllers |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Started | Started container config-sync-controllers |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328" already present on machine |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | Created | Created container: control-plane-machine-set-operator |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | Started | Started container control-plane-machine-set-operator |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Created | Created container: cluster-baremetal-operator |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e" already present on machine |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-48mrg | Started | Started container ovnkube-cluster-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-48mrg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-48mrg | Created | Created container: ovnkube-cluster-manager |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| (x4) | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Started | Started container machine-approver-controller |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Created | Created container: machine-approver-controller |
| (x4) | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1" already present on machine |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | Unhealthy | Readiness probe failed: Get "https://10.128.0.34:8443/healthz": dial tcp 10.128.0.34:8443: connect: connection refused |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | Started | Started container snapshot-controller |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | Created | Created container: snapshot-controller |
| | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | ProbeError | Readiness probe error: Get "https://10.128.0.34:8443/healthz": dial tcp 10.128.0.34:8443: connect: connection refused body: |
| | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | Unhealthy | Liveness probe failed: Get "https://10.128.0.34:8443/healthz": dial tcp 10.128.0.34:8443: connect: connection refused |
| | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | ProbeError | Liveness probe error: Get "https://10.128.0.34:8443/healthz": dial tcp 10.128.0.34:8443: connect: connection refused body: |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | Started | Started container controller-manager |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | Created | Created container: controller-manager |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Timeout: request did not complete within requested timeout - context deadline exceeded |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "PDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "ServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "All is well" to "PDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded",Upgradeable changed from True to False ("DownloadsCustomRouteSyncUpgradeable: Timeout: request did not complete within requested timeout - context deadline exceeded") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" to "All is well" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)" | |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: \"\",\nNodeInstallerDegraded: Namespace: (string) (len=14) \"openshift-etcd\",\nNodeInstallerDegraded: Clock: (clock.RealClock) {\nNodeInstallerDegraded: },\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=8) \"etcd-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=14) \"etcd-all-certs\"\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=3 cap=4) {\nNodeInstallerDegraded: (string) (len=8) \"etcd-pod\",\nNodeInstallerDegraded: (string) (len=14) \"etcd-endpoints\",\nNodeInstallerDegraded: (string) (len=16) \"etcd-all-bundles\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=3 cap=4) {\nNodeInstallerDegraded: (string) (len=16) \"restore-etcd-pod\",\nNodeInstallerDegraded: (string) (len=12) \"etcd-scripts\",\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=47) \"/etc/kubernetes/static-pod-resources/etcd-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 14:14:49.547412 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 14:14:49.561840 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 14:14:49.561924 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 14:14:49.561938 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 14:14:49.565381 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 14:15:19.565943 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 14:15:19.567325 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: ") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1203 14:14:25.545716 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556712 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1203 14:14:25.556806 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.556822 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1203 14:14:25.560206 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:35.565212 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I1203 14:14:45.565031 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1203 14:15:15.565644 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1203 14:15:15.569386 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: ") | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "ServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "DownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "ServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "ServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" | |
| (x2) | openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
AddSigtermProtection |
Adding SIGTERM protection |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-0 now has machineconfiguration.openshift.io/state=Degraded | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
Drain |
Drain not required, skipping | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-0 now has machineconfiguration.openshift.io/state=Working | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-0 now has machineconfiguration.openshift.io/reason=error setting node's state to Working: unable to update node "&Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{},Features:nil,},}": Timeout: request did not complete within requested timeout - context deadline exceeded | |
| (x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| (x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.28, 1 replicas available" |
| (x3) | openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapUpdated |
Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| (x10) | openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/console -n openshift-console because it changed |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/openshift-oauth-apiserver: could not be retrieved"),Available changed from True to False ("APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 1 to 2 because static pod is ready | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Container cluster-policy-controller failed startup probe, will be restarted | |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
ProbeError |
Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Unhealthy |
Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" | |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2" already present on machine |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" to "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" to "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nOAuthSessionSecretDegraded: Failed to apply session secret \"v4-0-config-system-session\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets v4-0-config-system-session)" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.apps.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.authorization.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.build.openshift.io)]") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-f9f7f4946-48mrg became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-f84784664-ntb9w_d2cae3a4-2f7e-431b-8daf-ef19d812a011 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7d7ddcf759-pvkrm became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" to "All is well" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well",Upgradeable changed from False to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_45f56424-93c7-4918-871d-756653763aa6 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest not ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-86897dd478-qqwh7_openshift-cluster-storage-operator(63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing |
| (x4) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831" already present on machine |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | OSUpgradeSkipped | OS upgrade skipped; new MachineConfig (rendered-master-459a0309a4bacb184a38028403c86289) has same OS image (quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411b8fa606f0f605401f0a4477f7f5a3e640d42bd145fdc09b8a78272f8e6baf) as old MachineConfig (rendered-master-2072f444cb169be2ed482bc255f04f4f) |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-86897dd478-qqwh7 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-86897dd478-qqwh7 became leader |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | OSUpdateStarted | Changing kernel arguments |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | master-0 | RemoveSigtermProtection | Removing SIGTERM protection |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Reboot | Node will reboot into config rendered-master-459a0309a4bacb184a38028403c86289 |
| | default | machineconfigdaemon | master-0 | OSUpdateStaged | Changes to OS staged |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| (x8) | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| (x7) | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| (x8) | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-check-endpoints | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-recovery-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine | |
openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Started |
Started container kube-rbac-proxy-crio | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-ensure-env-vars | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-cert-syncer | |
openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-recovery-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-regeneration-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Created |
Created container: kube-rbac-proxy-crio | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-rev | |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 403 |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 403 body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [-]poststarthook/apiservice-discovery-controller failed: reason withheld [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-scheduler | kubelet | installer-6-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | community-operators-7fwtv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-dns | kubelet | dns-default-5m4f8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-console | kubelet | console-6c9c84854-xf7nv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
packageserver-7c64dd9d8b-49skr |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring |
kubelet |
kube-state-metrics-7dcc7f9bd6-68wml |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-f9f7f4946-48mrg became leader |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered |
| | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : object "openshift-monitoring"/"prometheus-k8s-tls" not registered |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-error" : object "openshift-authentication"/"v4-0-config-user-template-error" not registered |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "audit-policies" : object "openshift-authentication"/"audit" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-login" : object "openshift-authentication"/"v4-0-config-user-template-login" not registered |
| | openshift-cluster-version | kubelet | cluster-version-operator-7c49fbfc6f-7krqx | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" already present on machine |
| (x5) | default | kubelet | master-0 | NodeNotReady | Node master-0 status is now: NodeNotReady |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Started | Started container node-ca |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container cni-plugins |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: cni-plugins |
| | openshift-network-operator | kubelet | network-operator-6cbf58c977-8lh6n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine |
| | openshift-network-operator | kubelet | network-operator-6cbf58c977-8lh6n | Created | Created container: network-operator |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Created | Created container: node-exporter |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-v7d88" : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda" already present on machine |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-session" : object "openshift-authentication"/"v4-0-config-system-session" not registered |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Created | Created container: node-ca |
| | openshift-image-registry | kubelet | node-ca-4p4zh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ad82327a0c3eac3d7a73ca67630eaf63bafc37514ea75cb6e8b51e995458b01" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-cbzpz" : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Created | Created container: webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Started | Started container webhook |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Created | Created container: machine-config-server |
| | openshift-machine-config-operator | kubelet | machine-config-server-pvrfs | Started | Started container machine-config-server |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| | openshift-multus | kubelet | multus-kk4tm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine |
| | openshift-multus | kubelet | multus-kk4tm | Created | Created container: kube-multus |
| (x2) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : object "openshift-apiserver"/"encryption-config-1" not registered |
| | openshift-network-node-identity | kubelet | network-node-identity-c8csx | Created | Created container: approver |
| (x2) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-apiserver"/"etcd-serving-ca" not registered |
| (x5) | default | kubelet | master-0 | Rebooted | Node master-0 has been rebooted, boot id: 5a54df78-64a7-4b65-a168-d6e871bf4ce7 |
| (x2) | openshift-authentication |
kubelet |
oauth-openshift-747bdb58b5-mn76f |
FailedMount |
MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered |
| (x2) | openshift-authentication |
kubelet |
oauth-openshift-747bdb58b5-mn76f |
FailedMount |
MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered |
| (x2) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered |
openshift-dns |
kubelet |
node-resolver-4xlhs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Created |
Created container: kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
| (x2) | openshift-monitoring |
kubelet |
telemeter-client-764cbf5554-kftwv |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| (x3) | openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-zhc87" : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Started |
Started container kubecfg-setup | |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-tfs27" : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-txl6b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine | |
| (x3) | openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nc9nj" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
openshift-cluster-version |
kubelet |
cluster-version-operator-7c49fbfc6f-7krqx |
Started |
Started container cluster-version-operator | |
| (x2) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : object "openshift-oauth-apiserver"/"etcd-client" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-oauth-apiserver"/"serving-cert" not registered |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-57fd58bc7b-kktql |
FailedMount |
MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered |
openshift-dns |
kubelet |
node-resolver-4xlhs |
Created |
Created container: dns-node-resolver | |
openshift-dns |
kubelet |
node-resolver-4xlhs |
Started |
Started container dns-node-resolver | |
| (x2) | openshift-console |
kubelet |
console-6c9c84854-xf7nv |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "image-import-ca" : object "openshift-apiserver"/"image-import-ca" not registered |
| (x3) | openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-rjbsl" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-7d7ddcf759-pvkrm |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-n798x" : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-marketplace |
kubelet |
redhat-marketplace-ddwmn |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-ncwtx" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
openshift-network-node-identity |
kubelet |
network-node-identity-c8csx |
Started |
Started container approver | |
openshift-multus |
kubelet |
multus-kk4tm |
Started |
Started container kube-multus | |
| (x2) | openshift-etcd-operator |
kubelet |
etcd-operator-7978bf889c-n64v4 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-marketplace |
kubelet |
marketplace-operator-7d67745bb7-dwcxb |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nxt87" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "audit" : object "openshift-apiserver"/"audit-1" not registered |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6985f84b49-v9vlg |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver"/"serving-cert" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : object "openshift-monitoring"/"alertmanager-main-tls" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
| (x3) | openshift-ingress-canary |
kubelet |
ingress-canary-vkpv4 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-28n2f" : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "config-volume" : object "openshift-monitoring"/"alertmanager-main-generated" not registered |
| (x3) | openshift-monitoring |
kubelet |
alertmanager-main-0 |
FailedMount |
MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered |
| (x3) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-rb6pb" : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-fw8h8" : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-service-ca |
kubelet |
service-ca-6b8bb995f7-b68p8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-jzlgx" : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine |
| (x3) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-apiserver"/"config" not registered |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Created | Created container: config-sync-controllers |
| (x3) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-cgq6z" : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : object "openshift-apiserver"/"etcd-client" not registered |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-t8knq" : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| | openshift-cluster-version | kubelet | cluster-version-operator-7c49fbfc6f-7krqx | Created | Created container: cluster-version-operator |
| | openshift-network-operator | kubelet | network-operator-6cbf58c977-8lh6n | Started | Started container network-operator |
| (x3) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-c5nch" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Started | Started container kube-rbac-proxy |
| (x3) | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered |
| (x3) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-czfkv" : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-pj4f8" : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-lfdn2" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-2fns8" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-7q659" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered |
| (x3) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-jn5h6" : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-apiserver | kubelet | apiserver-6985f84b49-v9vlg | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-apiserver"/"trusted-ca-bundle" not registered |
| (x2) | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-tls" : object "openshift-monitoring"/"thanos-querier-tls" not registered |
| (x3) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-nrngd" : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-p7ss6" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wqkdr" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-jkbcq" : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-console | kubelet | console-6c9c84854-xf7nv | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-d8bbn" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-console | kubelet | console-6c9c84854-xf7nv | FailedMount | MountVolume.SetUp failed for volume "service-ca" : object "openshift-console"/"service-ca" not registered |
| (x3) | openshift-console | kubelet | console-6c9c84854-xf7nv | FailedMount | MountVolume.SetUp failed for volume "oauth-serving-cert" : object "openshift-console"/"oauth-serving-cert" not registered |
| (x3) | openshift-console | kubelet | console-6c9c84854-xf7nv | FailedMount | MountVolume.SetUp failed for volume "console-serving-cert" : object "openshift-console"/"console-serving-cert" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-b62gf | Started | Started container kube-rbac-proxy |
| (x3) | openshift-console | kubelet | console-6c9c84854-xf7nv | FailedMount | MountVolume.SetUp failed for volume "console-config" : object "openshift-console"/"console-config" not registered |
| (x3) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered |
| (x3) | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-p5mrw" : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container bond-cni-plugin |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-m789m" : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Started | Started container kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: bond-cni-plugin |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq | Started | Started container config-sync-controllers |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: ovn-controller |
| (x4) | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : object "openshift-catalogd"/"catalogserver-cert" not registered |
| (x4) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered |
| (x4) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-service-ca-operator"/"serving-cert" not registered |
| (x4) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered |
| (x4) | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | FailedMount | MountVolume.SetUp failed for volume "monitoring-plugin-cert" : object "openshift-monitoring"/"monitoring-plugin-cert" not registered |
| (x4) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered |
| (x4) | openshift-console | kubelet | console-6c9c84854-xf7nv | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-console"/"trusted-ca-bundle" not registered |
| (x4) | openshift-console | kubelet | console-6c9c84854-xf7nv | FailedMount | MountVolume.SetUp failed for volume "console-oauth-config" : object "openshift-console"/"console-oauth-config" not registered |
| (x4) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : object "openshift-monitoring"/"metrics-server-tls" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| (x4) | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered |
| (x4) | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered |
| (x4) | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : object "openshift-monitoring"/"kube-state-metrics-tls" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-cc996c4bd-j4hzr | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered |
| (x4) | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | FailedMount | MountVolume.SetUp failed for volume "signing-key" : object "openshift-service-ca"/"signing-key" not registered |
| (x4) | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | FailedMount | MountVolume.SetUp failed for volume "signing-cabundle" : object "openshift-service-ca"/"signing-cabundle" not registered |
| (x4) | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : object "openshift-multus"/"multus-admission-controller-secret" not registered |
| (x4) | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-route-controller-manager"/"config" not registered |
| (x4) | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-console-operator"/"trusted-ca" not registered |
| (x4) | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-console-operator"/"serving-cert" not registered |
| (x4) | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-console-operator"/"console-operator-config" not registered |
| (x4) | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | FailedMount | MountVolume.SetUp failed for volume "client-ca" : object "openshift-route-controller-manager"/"client-ca" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container nbdb |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container northd |
| (x4) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine |
| (x4) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | FailedMount | MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered |
| (x4) | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | FailedMount | MountVolume.SetUp failed for volume "telemetry-config" : object "openshift-monitoring"/"telemetry-config" not registered |
| (x4) | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: kube-rbac-proxy-ovn-metrics |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | FailedMount | MountVolume.SetUp failed for volume "client-ca" : object "openshift-controller-manager"/"client-ca" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager"/"config" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager"/"serving-cert" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : object "openshift-controller-manager"/"openshift-global-ca" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container kube-rbac-proxy-node |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | FailedMount | MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered |
| (x4) | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered |
| (x4) | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | FailedMount | MountVolume.SetUp failed for volume "federate-client-tls" : object "openshift-monitoring"/"federate-client-certs" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : object "openshift-monitoring"/"telemeter-client-tls" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | FailedMount | MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-764cbf5554-kftwv | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client" : object "openshift-monitoring"/"telemeter-client" not registered |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container ovn-acl-logging |
| (x4) | openshift-multus | kubelet | network-metrics-daemon-ch7xd | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Created | Created container: ovn-acl-logging |
| (x4) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-storage-version-migrator-operator"/"config" not registered |
| (x4) | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | FailedMount | MountVolume.SetUp failed for volume "networking-console-plugin-cert" : object "openshift-network-console"/"networking-console-plugin-cert" not registered |
| (x4) | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | FailedMount | MountVolume.SetUp failed for volume "nginx-conf" : object "openshift-network-console"/"networking-console-plugin" not registered |
| (x4) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered |
| (x4) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered |
| (x4) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-n7qz8 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered |
| (x4) | openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
FailedMount |
MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered |
| (x4) | openshift-insights |
kubelet |
insights-operator-59d99f9b7b-74sss |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-insights"/"openshift-insights-serving-cert" not registered |
| (x4) | openshift-dns |
kubelet |
dns-default-5m4f8 |
FailedMount |
MountVolume.SetUp failed for volume "config-volume" : object "openshift-dns"/"dns-default" not registered |
| (x4) | openshift-dns |
kubelet |
dns-default-5m4f8 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns"/"dns-default-metrics-tls" not registered |
| (x4) | openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 |
FailedMount |
MountVolume.SetUp failed for volume "tls-certificates" : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
FailedMount |
MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered |
| (x4) | openshift-config-operator |
kubelet |
openshift-config-operator-68c95b6cf5-fmdmz |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-config-operator"/"config-operator-serving-cert" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-5fdc576499-j2n8j |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
| (x4) | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-insights"/"service-ca-bundle" not registered |
| (x4) | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-insights"/"trusted-ca-bundle" not registered |
| (x4) | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-ingress-operator"/"trusted-ca" not registered |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-ingress-operator"/"metrics-tls" not registered |
| (x4) | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"kube-rbac-proxy" not registered |
| (x4) | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : object "openshift-machine-api"/"machine-api-operator-tls" not registered |
| (x4) | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
| (x4) | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-ingress-canary"/"canary-serving-cert" not registered |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | FailedMount | MountVolume.SetUp failed for volume "marketplace-trusted-ca" : object "openshift-marketplace"/"marketplace-trusted-ca" not registered |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : object "openshift-marketplace"/"marketplace-operator-metrics" not registered |
| (x4) | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : object "openshift-monitoring"/"prometheus-operator-tls" not registered |
| (x4) | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns-operator"/"metrics-tls" not registered |
| (x4) | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Created | Created container: iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-n24qb | Started | Started container iptables-alerter |
| (x4) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | FailedMount | MountVolume.SetUp failed for volume "cco-trusted-ca" : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered |
| (x4) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered |
| (x4) | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered |
| (x4) | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| (x4) | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| (x4) | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : object "openshift-image-registry"/"image-registry-operator-tls" not registered |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : object "openshift-oauth-apiserver"/"encryption-config-1" not registered |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-57fd58bc7b-kktql | FailedMount | MountVolume.SetUp failed for volume "audit-policies" : object "openshift-oauth-apiserver"/"audit-1" not registered |
| (x4) | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered |
| (x4) | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | FailedMount | MountVolume.SetUp failed for volume "etcd-service-ca" : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-etcd-operator"/"etcd-operator-config" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : object "openshift-etcd-operator"/"etcd-client" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-n64v4 | FailedMount | MountVolume.SetUp failed for volume "etcd-ca" : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered |
| (x4) | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-authentication-operator"/"serving-cert" not registered |
| (x4) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered |
| (x4) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-authentication-operator"/"service-ca-bundle" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-authentication-operator"/"authentication-operator-config" not registered |
| (x4) | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container routeoverride-cni |
| (x4) | openshift-network-diagnostics | kubelet | network-check-target-pcchm | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-v429m" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-92p99" : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-9cnd5" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | BackOff | Back-off restarting failed container machine-approver-controller in pod machine-approver-cb84b9cdf-qn94w_openshift-cluster-machine-approver(a9b62b2f-1e7a-4f1b-a988-4355d93dda46) |
| (x4) | openshift-marketplace | kubelet | certified-operators-t8rt7 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-fn7fm" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-x22gr" : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-marketplace | kubelet | redhat-operators-6z4sc | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-mhf9r" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-kube-scheduler | kubelet | installer-6-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered |
| (x4) | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-bwck4" : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-ltsnd" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wwv7s" : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" already present on machine |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-8wh8g" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: routeoverride-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container sbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container whereabouts-cni-bincopy |
| (x5) | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-image-registry"/"trusted-ca" not registered |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-route-controller-manager"/"serving-cert" not registered |
| (x5) | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| (x5) | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-p6dpf" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-txl6b | Started | Started container ovnkube-controller |
| (x5) | openshift-machine-api | kubelet | control-plane-machine-set-operator-66f4cc99d4-x278n | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-5mk6r" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-marketplace | kubelet | community-operators-7fwtv | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-zcqxx" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dmqvl" : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-djxkd" : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-42hmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine |
| (x5) | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_b8c123da-4cea-47e8-9f40-bcad75ea2654 became leader |
| (x5) | openshift-kube-apiserver | kubelet | installer-6-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered |
| | openshift-kube-controller-manager | node-controller | kube-controller-manager-master-0 | NodeNotReady | Node is not ready |
| | openshift-network-console | node-controller | networking-console-plugin-7c696657b7-452tx | NodeNotReady | Node is not ready |
| | openshift-image-registry | node-controller | node-ca-4p4zh | NodeNotReady | Node is not ready |
| | openshift-monitoring | node-controller | telemeter-client-764cbf5554-kftwv | NodeNotReady | Node is not ready |
| | openshift-monitoring | node-controller | monitoring-plugin-547cc9cc49-kqs4k | NodeNotReady | Node is not ready |
| | openshift-cluster-machine-approver | node-controller | machine-approver-cb84b9cdf-qn94w | NodeNotReady | Node is not ready |
| | openshift-console-operator | node-controller | console-operator-77df56447c-vsrxx | NodeNotReady | Node is not ready |
| | openshift-marketplace | node-controller | community-operators-7fwtv | NodeNotReady | Node is not ready |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-oauth-apiserver | node-controller | apiserver-57fd58bc7b-kktql | NodeNotReady | Node is not ready |
| | openshift-catalogd | node-controller | catalogd-controller-manager-754cfd84-qf898 | NodeNotReady | Node is not ready |
| | openshift-service-ca | node-controller | service-ca-6b8bb995f7-b68p8 | NodeNotReady | Node is not ready |
| (x2) | openshift-cluster-node-tuning-operator | node-controller | tuned-7zkbg | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | kube-rbac-proxy-crio-master-0 | NodeNotReady | Node is not ready |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6f689c85c4 to 1 |
| | openshift-machine-config-operator | node-controller | machine-config-daemon-2ztl9 | NodeNotReady | Node is not ready |
| | openshift-ingress-canary | node-controller | ingress-canary-vkpv4 | NodeNotReady | Node is not ready |
| | openshift-machine-api | node-controller | machine-api-operator-7486ff55f-wcnxg | NodeNotReady | Node is not ready |
| | openshift-dns | node-controller | node-resolver-4xlhs | NodeNotReady | Node is not ready |
| | openshift-operator-lifecycle-manager | node-controller | package-server-manager-75b4d49d4c-h599p | NodeNotReady | Node is not ready |
| | openshift-operator-lifecycle-manager | node-controller | catalog-operator-7cf5cf757f-zgm6l | NodeNotReady | Node is not ready |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-j2n8j | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | node-controller | machine-config-server-pvrfs | NodeNotReady | Node is not ready |
| | openshift-ovn-kubernetes | node-controller | ovnkube-control-plane-f9f7f4946-48mrg | NodeNotReady | Node is not ready |
| | openshift-insights | node-controller | insights-operator-59d99f9b7b-74sss | NodeNotReady | Node is not ready |
| (x5) | openshift-monitoring | kubelet | metrics-server-555496955b-vpcbs | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | node-controller | metrics-server-555496955b-vpcbs | NodeNotReady | Node is not ready |
| | openshift-kube-controller-manager-operator | node-controller | kube-controller-manager-operator-b5dddf8f5-kwb74 | NodeNotReady | Node is not ready |
| | openshift-console | replicaset-controller | console-6f689c85c4 | SuccessfulCreate | Created pod: console-6f689c85c4-fv97m |
| (x6) | openshift-controller-manager | kubelet | controller-manager-7d7ddcf759-pvkrm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1" already present on machine |
| (x10) | openshift-ingress | kubelet | router-default-54f97f57-rr9px | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x15) | openshift-ingress | kubelet | router-default-54f97f57-rr9px | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Started | Started container machine-approver-controller |
| (x6) | openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-hpdzl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-cb84b9cdf-qn94w | Created | Created container: machine-approver-controller |
| (x8) | openshift-authentication | kubelet | oauth-openshift-747bdb58b5-mn76f | FailedMount | (combined from similar events): MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered |
| (x10) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | (combined from similar events): MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| (x7) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-7c4697b5f5-9f69p |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-dns |
kubelet |
dns-default-5m4f8 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-marketplace |
kubelet |
redhat-operators-6z4sc |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-76bd5d69c7-fjrrg |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-marketplace |
kubelet |
redhat-marketplace-ddwmn |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-7cf5cf757f-zgm6l |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-monitoring |
kubelet |
monitoring-plugin-547cc9cc49-kqs4k |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7f88444875-6dk29 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-network-console |
kubelet |
networking-console-plugin-7c696657b7-452tx |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-ingress-canary |
kubelet |
ingress-canary-vkpv4 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-kube-storage-version-migrator |
kubelet |
migrator-5bcf58cf9c-dvklg |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-marketplace |
kubelet |
marketplace-operator-7d67745bb7-dwcxb |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x4) | openshift-monitoring |
kubelet |
prometheus-operator-565bdcb8-477pk |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-machine-config-operator |
kubelet |
machine-config-controller-74cddd4fb5-phk6r |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x4) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-s5s96 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-marketplace | kubelet | community-operators-7fwtv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-ch7xd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-network-diagnostics | kubelet | network-check-target-pcchm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-fhnc5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-44frm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-marketplace | kubelet | certified-operators-t8rt7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-route-controller-manager | kubelet | route-controller-manager-74cff6cf84-bh8rz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-kube-scheduler | kubelet | installer-6-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-console-operator | kubelet | console-operator-77df56447c-vsrxx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-qqwh7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" already present on machine |
| | default | kubelet | master-0 | NodeReady | Node master-0 status is now: NodeReady |
| | openshift-network-console | multus | networking-console-plugin-7c696657b7-452tx | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-monitoring | multus | metrics-server-555496955b-vpcbs | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-image-registry | multus | cluster-image-registry-operator-65dc4bcb88-96zcz | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-machine-config-operator | multus | machine-config-operator-664c9d94c9-9vfr4 | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-multus | multus | multus-admission-controller-84c998f64f-8stq7 | AddedInterface | Add eth0 [10.128.0.31/23] from ovn-kubernetes |
| | openshift-monitoring | multus | monitoring-plugin-547cc9cc49-kqs4k | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25b69045d961dc26719bc4cbb3a854737938b6e97375c04197e9cbc932541b17" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Started | Started container machine-config-operator |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Started | Started container multus-admission-controller |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Created | Created container: kube-rbac-proxy |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | Created | Created container: cluster-image-registry-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-96zcz | Started | Started container cluster-image-registry-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Created | Created container: multus-admission-controller |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30948d73ae763e995468b7e0767b855425ccbbbef13667a2fd3ba06b3c40a165" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-operator-664c9d94c9-9vfr4 | Created | Created container: machine-config-operator |
| | openshift-monitoring | kubelet | prometheus-operator-565bdcb8-477pk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:903557bdbb44cf720481cc9b305a8060f327435d303c95e710b92669ff43d055" already present on machine |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Created | Created container: check-endpoints |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-marketplace | multus | marketplace-operator-7d67745bb7-dwcxb | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601" already present on machine |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-monitoring | multus | thanos-querier-cc996c4bd-j4hzr | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-b5dddf8f5-kwb74 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" already present on machine |
| | openshift-service-ca | multus | service-ca-6b8bb995f7-b68p8 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-dns | kubelet | dns-default-5m4f8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6" already present on machine |
| | openshift-dns | multus | dns-default-5m4f8 | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Started | Started container check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-g4lv2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine |
| | openshift-network-diagnostics | multus | network-check-source-6964bb78b7-g4lv2 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-bbd9b9dff-rrfsm | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-rrfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" already present on machine |
| | openshift-monitoring | multus | telemeter-client-764cbf5554-kftwv | AddedInterface | Add eth0 [10.128.0.30/23] from ovn-kubernetes |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | Started | Started container networking-console-plugin |
| | openshift-network-console | kubelet | networking-console-plugin-7c696657b7-452tx | Created | Created container: networking-console-plugin |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d" already present on machine |
| | openshift-multus | multus | network-metrics-daemon-ch7xd | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Started | Started container machine-config-controller |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Unhealthy | Readiness probe failed: Get "https://10.128.0.73:8443/healthz": dial tcp 10.128.0.73:8443: connect: connection refused |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | ProbeError | Readiness probe error: Get "https://10.128.0.73:8443/healthz": dial tcp 10.128.0.73:8443: connect: connection refused body: |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f870aa3c7bcd039c7905b2c7a9e9c0776d76ed4cf34ccbef872ae7ad8cf2157f" already present on machine |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Created | Created container: machine-config-controller |
| | openshift-operator-lifecycle-manager | multus | packageserver-7c64dd9d8b-49skr | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-monitoring | multus | cluster-monitoring-operator-69cc794c58-mfjk2 | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4e0b20fdb38d516e871ff5d593c4273cc9933cb6a65ec93e727ca4a7777fd20" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Created | Created container: cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-mfjk2 | Started | Started container cluster-monitoring-operator |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7c64dd9d8b-49skr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| | openshift-machine-config-operator | multus | machine-config-controller-74cddd4fb5-phk6r | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-ingress-canary | multus | ingress-canary-vkpv4 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-vkpv4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | multus | kube-state-metrics-7dcc7f9bd6-68wml | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-operator-565bdcb8-477pk | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0737727dcbfb50c3c09b69684ba3c07b5a4ab7652bbe4970a46d6a11c4a2bca" already present on machine |
| | openshift-insights | multus | insights-operator-59d99f9b7b-74sss | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44e82a51fce7b5996b183c10c44bd79b0e1ae2257fd5809345fbca1c50aaa08f" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Created | Created container: monitoring-plugin |
| | openshift-monitoring | kubelet | monitoring-plugin-547cc9cc49-kqs4k | Started | Started container monitoring-plugin |
| | openshift-kube-scheduler | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Started | Started container kube-controller-manager-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-phk6r | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | multus | dns-operator-6b7bcd6566-jh9m8 | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Created | Created container: kube-scheduler-operator-container |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-86bh9 | Started | Started container kube-scheduler-operator-container |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-f84784664-ntb9w | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Created | Created container: marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-dwcxb | Started | Started container marketplace-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | multus | redhat-marketplace-ddwmn | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator | multus | migrator-5bcf58cf9c-dvklg | AddedInterface | Add eth0 [10.128.0.27/23] from ovn-kubernetes |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | Started | Started container service-ca-controller |
| | openshift-service-ca | kubelet | service-ca-6b8bb995f7-b68p8 | Created | Created container: service-ca-controller |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Started | Started container openshift-controller-manager-operator |
| | openshift-monitoring | multus | openshift-state-metrics-57cbc648f8-q4cgg | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Created | Created container: openshift-controller-manager-operator |
| | openshift-dns | kubelet | dns-default-5m4f8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-dns | kubelet | dns-default-5m4f8 | Started | Started container dns |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-9f69p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" already present on machine |
| | openshift-apiserver | multus | apiserver-6985f84b49-v9vlg | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-dns | kubelet | dns-default-5m4f8 | Created | Created container: dns |
| | openshift-machine-api | multus | cluster-autoscaler-operator-7f88444875-6dk29 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-7c4697b5f5-9f69p | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | Started | Started container network-check-target-container |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | Created | Created container: network-check-target-container |
| | openshift-network-diagnostics | kubelet | network-check-target-pcchm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine |
| | openshift-network-diagnostics | multus | network-check-target-pcchm | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-kube-apiserver | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-6d64b47964-jjd7h | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-etcd-operator | multus | etcd-operator-7978bf889c-n64v4 | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-marketplace | multus | community-operators-7fwtv | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-kwb74 | Created | Created container: kube-controller-manager-operator |
| | openshift-controller-manager | multus | controller-manager-7d7ddcf759-pvkrm | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-667484ff5-n7qz8 | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-75b4d49d4c-h599p | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes |
| | openshift-service-ca-operator | multus | service-ca-operator-56f5898f45-fhnc5 | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-authentication | multus | oauth-openshift-747bdb58b5-mn76f | AddedInterface | Add eth0 [10.128.0.94/23] from ovn-kubernetes |
| | openshift-console | multus | console-6c9c84854-xf7nv | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-7c4dc67499-tjwg8 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Created | Created container: network-metrics-daemon |
| | openshift-insights | kubelet | insights-operator-59d99f9b7b-74sss | Created | Created container: insights-operator |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-67c4cff67d-q2lxz | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-machine-api | multus | cluster-baremetal-operator-5fdc576499-j2n8j | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-machine-api | multus | control-plane-machine-set-operator-66f4cc99d4-x278n | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-marketplace | multus | certified-operators-t8rt7 | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-7b795784b8-44frm | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-6z4sc | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-ch7xd | Started | Started container network-metrics-daemon |
openshift-monitoring |
kubelet |
kube-state-metrics-7dcc7f9bd6-68wml |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7dcc7f9bd6-68wml |
Started |
Started container kube-state-metrics | |
openshift-monitoring |
kubelet |
kube-state-metrics-7dcc7f9bd6-68wml |
Created |
Created container: kube-state-metrics | |
openshift-multus |
kubelet |
network-metrics-daemon-ch7xd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-oauth-apiserver |
multus |
apiserver-57fd58bc7b-kktql |
AddedInterface |
Add eth0 [10.128.0.43/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-7c64dd9d8b-49skr |
Created |
Created container: packageserver | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-7c64dd9d8b-49skr |
Started |
Started container packageserver | |
openshift-catalogd |
multus |
catalogd-controller-manager-754cfd84-qf898 |
AddedInterface |
Add eth0 [10.128.0.33/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-7fwtv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-cluster-olm-operator |
multus |
cluster-olm-operator-589f5cdc9d-5h2kn |
AddedInterface |
Add eth0 [10.128.0.9/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-86897dd478-qqwh7 |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-config-operator |
multus |
openshift-config-operator-68c95b6cf5-fmdmz |
AddedInterface |
Add eth0 [10.128.0.68/23] from ovn-kubernetes | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-6z4sc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-operator-lifecycle-manager |
multus |
olm-operator-76bd5d69c7-fjrrg |
AddedInterface |
Add eth0 [10.128.0.59/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-565bdcb8-477pk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-565bdcb8-477pk |
Started |
Started container prometheus-operator | |
openshift-console |
multus |
downloads-6f5db8559b-96ljh |
AddedInterface |
Add eth0 [10.128.0.80/23] from ovn-kubernetes | |
openshift-ingress-canary |
kubelet |
ingress-canary-vkpv4 |
Created |
Created container: serve-healthcheck-canary | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
Created |
Created container: cluster-node-tuning-operator | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bbd9b9dff-rrfsm |
Started |
Started container cluster-node-tuning-operator | |
openshift-cluster-node-tuning-operator |
performance-profile-controller |
cluster-node-tuning-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-operator-controller |
multus |
operator-controller-controller-manager-5f78c89466-bshxw |
AddedInterface |
Add eth0 [10.128.0.35/23] from ovn-kubernetes | |
openshift-ingress-canary |
kubelet |
ingress-canary-vkpv4 |
Started |
Started container serve-healthcheck-canary | |
openshift-authentication-operator |
multus |
authentication-operator-7479ffdf48-hpdzl |
AddedInterface |
Add eth0 [10.128.0.7/23] from ovn-kubernetes | |
openshift-console-operator |
multus |
console-operator-77df56447c-vsrxx |
AddedInterface |
Add eth0 [10.128.0.75/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
installer-6-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-5b557b5f57-s5s96 |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-route-controller-manager |
multus |
route-controller-manager-74cff6cf84-bh8rz |
AddedInterface |
Add eth0 [10.128.0.36/23] from ovn-kubernetes | |
openshift-ingress-operator |
multus |
ingress-operator-85dbd94574-8jfp5 |
AddedInterface |
Add eth0 [10.128.0.19/23] from ovn-kubernetes | |
openshift-machine-api |
multus |
machine-api-operator-7486ff55f-wcnxg |
AddedInterface |
Add eth0 [10.128.0.56/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-565bdcb8-477pk |
Created |
Created container: prometheus-operator | |
openshift-kube-scheduler |
kubelet |
installer-6-master-0 |
Created |
Created container: installer | |
openshift-operator-lifecycle-manager |
multus |
catalog-operator-7cf5cf757f-zgm6l |
AddedInterface |
Add eth0 [10.128.0.58/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
installer-6-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine | |
openshift-console-operator |
kubelet |
console-operator-77df56447c-vsrxx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89b279931fe13f3b33c9dd6cdf0f5e7fc3e5384b944f998034d35af7242a47fa" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-6z4sc |
Started |
Started container extract-utilities | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b795784b8-44frm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9" already present on machine | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-7cf5cf757f-zgm6l |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-console |
kubelet |
downloads-6f5db8559b-96ljh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d886210d2faa9ace5750adfc70c0c3c5512cdf492f19d1c536a446db659aabb" already present on machine | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5b557b5f57-s5s96 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-5f78c89466-bshxw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-6z4sc |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-t8rt7 |
Created |
Created container: extract-utilities | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" already present on machine | |
openshift-config-operator |
kubelet |
openshift-config-operator-68c95b6cf5-fmdmz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e0e3400f1cb68a205bfb841b6b1a78045e7d80703830aa64979d46418d19c835" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-565bdcb8-477pk |
Created |
Created container: kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-76bd5d69c7-fjrrg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-catalogd |
kubelet |
catalogd-controller-manager-754cfd84-qf898 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7dcc7f9bd6-68wml |
Created |
Created container: kube-rbac-proxy-main | |
openshift-machine-api |
kubelet |
machine-api-operator-7486ff55f-wcnxg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Created |
Created container: copy-catalogd-manifests | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" already present on machine | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" already present on machine | |
openshift-insights |
openshift-insights-operator |
insights-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-7fwtv |
Created |
Created container: extract-utilities | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7f88444875-6dk29 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-86897dd478-qqwh7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-57cbc648f8-q4cgg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bcf58cf9c-dvklg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-ddwmn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
Started |
Started container kube-rbac-proxy | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03" already present on machine | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328" already present on machine | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
Created |
Created container: kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Started |
Started container kube-rbac-proxy | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Started |
Started container ingress-operator | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-7d7ddcf759-pvkrm became leader | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
Started |
Started container kube-storage-version-migrator-operator | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
Started |
Started container service-ca-operator | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Created |
Created container: kube-rbac-proxy | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bcf58cf9c-dvklg |
Created |
Created container: migrator | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bcf58cf9c-dvklg |
Started |
Started container migrator | |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-86897dd478-qqwh7 |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-86897dd478-qqwh7 became leader | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7c4dc67499-tjwg8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfc0403f71f7c926db1084c7fb5fb4f19007271213ee34f6f3d3eecdbe817d6b" already present on machine | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-56f5898f45-fhnc5 |
Created |
Created container: service-ca-operator | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5b557b5f57-s5s96 |
Started |
Started container kube-apiserver-operator | |
| (x2) | openshift-marketplace |
kubelet |
marketplace-operator-7d67745bb7-dwcxb |
ProbeError |
Readiness probe error: Get "http://10.128.0.21:8080/healthz": dial tcp 10.128.0.21:8080: connect: connection refused body: |
| (x2) | openshift-marketplace |
kubelet |
marketplace-operator-7d67745bb7-dwcxb |
Unhealthy |
Readiness probe failed: Get "http://10.128.0.21:8080/healthz": dial tcp 10.128.0.21:8080: connect: connection refused |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-86897dd478-qqwh7 |
Created |
Created container: snapshot-controller | |
openshift-monitoring |
kubelet |
prometheus-operator-565bdcb8-477pk |
Started |
Started container kube-rbac-proxy | |
openshift-dns |
kubelet |
dns-default-5m4f8 |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
network-metrics-daemon-ch7xd |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
network-metrics-daemon-ch7xd |
Started |
Started container kube-rbac-proxy | |
openshift-dns |
kubelet |
dns-default-5m4f8 |
Created |
Created container: kube-rbac-proxy | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Started |
Started container installer | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-589f5cdc9d-5h2kn |
Started |
Started container copy-catalogd-manifests | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Created |
Created container: installer | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-86897dd478-qqwh7 |
Started |
Started container snapshot-controller | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5b557b5f57-s5s96 |
Created |
Created container: kube-apiserver-operator | |
openshift-marketplace |
kubelet |
community-operators-7fwtv |
Started |
Started container extract-utilities | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-6d64b47964-jjd7h |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" already present on machine | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-67c4cff67d-q2lxz |
Created |
Created container: kube-storage-version-migrator-operator | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-75b4d49d4c-h599p |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-t8rt7 |
Started |
Started container extract-utilities | |
openshift-monitoring |
kubelet |
kube-state-metrics-7dcc7f9bd6-68wml |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7dcc7f9bd6-68wml |
Started |
Started container kube-rbac-proxy-main | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bcf58cf9c-dvklg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" already present on machine | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ValidatingWebhookConfigurationUpdated |
Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ValidatingWebhookConfigurationUpdated |
Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed | |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-marketplace |
kubelet |
redhat-operators-6z4sc |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Created |
Created container: ingress-operator | |
openshift-ingress-operator |
kubelet |
ingress-operator-85dbd94574-8jfp5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
Started |
Started container openshift-apiserver-operator | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7f88444875-6dk29 |
Started |
Started container kube-rbac-proxy | |
openshift-config-operator |
kubelet |
openshift-config-operator-68c95b6cf5-fmdmz |
Created |
Created container: openshift-api | |
openshift-marketplace |
kubelet |
redhat-operators-6z4sc |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 950ms (950ms including waiting). Image size: 1609963837 bytes. | |
openshift-console |
kubelet |
downloads-6f5db8559b-96ljh |
Started |
Started container download-server | |
openshift-console |
kubelet |
downloads-6f5db8559b-96ljh |
Created |
Created container: download-server | |
openshift-config-operator |
kubelet |
openshift-config-operator-68c95b6cf5-fmdmz |
Started |
Started container openshift-api | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b795784b8-44frm |
Started |
Started container csi-snapshot-controller-operator | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Started |
Started container dns-operator | |
openshift-console-operator |
kubelet |
console-operator-77df56447c-vsrxx |
Created |
Created container: console-operator | |
openshift-dns-operator |
kubelet |
dns-operator-6b7bcd6566-jh9m8 |
Created |
Created container: dns-operator | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
Started |
Started container control-plane-machine-set-operator | |
openshift-console-operator |
kubelet |
console-operator-77df56447c-vsrxx |
Started |
Started container console-operator | |
openshift-config-operator |
kubelet |
openshift-config-operator-68c95b6cf5-fmdmz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251" already present on machine | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-66f4cc99d4-x278n |
Created |
Created container: control-plane-machine-set-operator | |
openshift-catalogd |
kubelet |
catalogd-controller-manager-754cfd84-qf898 |
Created |
Created container: kube-rbac-proxy | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-667484ff5-n7qz8 |
Created |
Created container: openshift-apiserver-operator | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b795784b8-44frm |
Created |
Created container: csi-snapshot-controller-operator | |
openshift-marketplace |
kubelet |
certified-operators-t8rt7 |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-76bd5d69c7-fjrrg |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.59:8443/healthz": dial tcp 10.128.0.59:8443: connect: connection refused | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-76bd5d69c7-fjrrg |
ProbeError |
Readiness probe error: Get "https://10.128.0.59:8443/healthz": dial tcp 10.128.0.59:8443: connect: connection refused body: | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-76bd5d69c7-fjrrg |
Started |
Started container olm-operator | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f84784664-ntb9w |
Created |
Created container: cluster-storage-operator | |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f84784664-ntb9w | Started | Started container cluster-storage-operator |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 747ms (747ms including waiting). Image size: 1204969293 bytes. |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container extract-utilities |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Created | Created container: catalog-operator |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: kube-rbac-proxy-main |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-fjrrg | Created | Created container: olm-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d41c3e944e86b73b4ba0d037ff016562211988f3206b9deb6cc7dccca708248" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-zgm6l | Started | Started container catalog-operator |
| | openshift-machine-api | kubelet | machine-api-operator-7486ff55f-wcnxg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Started | Started container kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" already present on machine |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" already present on machine |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Created | Created container: kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container extract-content |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Created | Created container: cluster-samples-operator |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 900ms (900ms including waiting). Image size: 1201319250 bytes. |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | Created | Created container: kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Created | Created container: manager |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Started | Started container copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Created | Created container: copy-operator-controller-manifests |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Created | Created container: package-server-manager |
| | openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-8jfp5 | Started | Started container kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Started | Started container manager |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Created | Created container: cluster-autoscaler-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Created | Created container: openshift-config-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-fmdmz | Started | Started container openshift-config-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" already present on machine |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-7f88444875-6dk29 | Started | Started container cluster-autoscaler-operator |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Started | Started container kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-jh9m8 | Created | Created container: kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: extract-content |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Created | Created container: graceful-termination |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Started | Started container cluster-samples-operator |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Created | Created container: extract-content |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Created | Created container: cloud-credential-operator |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-h599p | Started | Started container package-server-manager |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7c4dc67499-tjwg8 | Started | Started container cloud-credential-operator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bcf58cf9c-dvklg | Started | Started container graceful-termination |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e39fd49a8aa33e4b750267b4e773492b85c08cc7830cd7b22e64a92bcb5b6729" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-5f78c89466-bshxw | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-68wml | Created | Created container: kube-rbac-proxy-self |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 706ms (706ms including waiting). Image size: 1129027903 bytes. |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: kube-rbac-proxy-self |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Started | Started container cluster-samples-operator-watch |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-jjd7h | Created | Created container: cluster-samples-operator-watch |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Created | Created container: manager |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: extract-content |
| | openshift-catalogd | kubelet | catalogd-controller-manager-754cfd84-qf898 | Started | Started container manager |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Started | Started container cluster-olm-operator |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 949ms (949ms including waiting). Image size: 912736453 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-5h2kn | Created | Created container: cluster-olm-operator |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Created | Created container: openshift-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-q4cgg | Started | Started container openshift-state-metrics |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 420ms (420ms including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Unhealthy | Liveness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| | openshift-console | kubelet | downloads-6f5db8559b-96ljh | ProbeError | Liveness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| (x4) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | Unhealthy | Readiness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| (x4) | openshift-console | kubelet | downloads-6f5db8559b-96ljh | ProbeError | Readiness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | certified-operators-t8rt7 | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 4.581s (4.581s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-7fwtv | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6z4sc | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ddwmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 4.477s (4.477s including waiting). Image size: 912736453 bytes. |
| | openshift-network-node-identity | master-0_64ae3384-ad8b-4c7d-adba-df6cd096ce28 | ovnkube-identity | LeaderElection | master-0_64ae3384-ad8b-4c7d-adba-df6cd096ce28 became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | static-pod-installer | installer-6-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-kube-scheduler | cert-recovery-controller | openshift-kube-scheduler | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_8223fffd-ea2f-43dc-8346-8315d2012af6 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-cloud-controller-manager-operator | master-0_8578c9ab-63f0-450e-8544-71723a34d1c4 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_8578c9ab-63f0-450e-8544-71723a34d1c4 became leader |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_d5c4821c-a58e-4f00-b5ec-5fc0ace7ae72 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_92ee1fc5-6ee7-4097-85ec-4dac6117c268 became leader |
| | openshift-cloud-controller-manager-operator | master-0_98c17d0f-b42a-4f52-a608-7f537fff4951 | cluster-cloud-config-sync-leader | LeaderElection | master-0_98c17d0f-b42a-4f52-a608-7f537fff4951 became leader |
| | openshift-cluster-machine-approver | master-0_8e61b660-b350-4100-9c6a-6115873b4220 | cluster-machine-approver-leader | LeaderElection | master-0_8e61b660-b350-4100-9c6a-6115873b4220 became leader |
| | openshift-machine-api | control-plane-machine-set-operator-66f4cc99d4-x278n_c380145f-c126-4979-8f35-d0b4add1203d | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-66f4cc99d4-x278n_c380145f-c126-4979-8f35-d0b4add1203d became leader |
| | openshift-catalogd | catalogd-controller-manager-754cfd84-qf898_d471221a-4890-4e71-9dcb-dbbe3812c137 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-754cfd84-qf898_d471221a-4890-4e71-9dcb-dbbe3812c137 became leader |
| | openshift-machine-api | cluster-baremetal-operator-5fdc576499-j2n8j_b315f3ee-c11c-4c90-8130-d53524785a0a | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-5fdc576499-j2n8j_b315f3ee-c11c-4c90-8130-d53524785a0a became leader |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-controller | operator-controller-controller-manager-5f78c89466-bshxw_960555bb-4ed9-4638-9c0a-a29f6c5fe650 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-5f78c89466-bshxw_960555bb-4ed9-4638-9c0a-a29f6c5fe650 became leader |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29412870 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29412870 | SuccessfulCreate | Created pod: collect-profiles-29412870-qng6z |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_1216fb93-ec1c-4d02-8cc7-5ba2fa0e766a became leader |
| | openshift-console | default-scheduler | console-6f689c85c4-fv97m | Scheduled | Successfully assigned openshift-console/console-6f689c85c4-fv97m to master-0 |
| | openshift-marketplace | default-scheduler | redhat-marketplace-vrhjw | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-vrhjw to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29412870-qng6z | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z to master-0 |
| | openshift-marketplace | default-scheduler | redhat-operators-fh5cv | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-fh5cv to master-0 |
| | openshift-marketplace | default-scheduler | certified-operators-5msvs | Scheduled | Successfully assigned openshift-marketplace/certified-operators-5msvs to master-0 |
| | openshift-marketplace | default-scheduler | community-operators-szmjn | Scheduled | Successfully assigned openshift-marketplace/community-operators-szmjn to master-0 |
| | openshift-console | multus | console-6f689c85c4-fv97m | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-fh5cv | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Started | Started container extract-utilities |
| | openshift-marketplace | multus | redhat-marketplace-vrhjw | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Created | Created container: extract-utilities |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412870-qng6z | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412870-qng6z | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29412870-qng6z | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-6f689c85c4-fv97m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine |
| | openshift-console | kubelet | console-6f689c85c4-fv97m | Created | Created container: console |
| | openshift-console | kubelet | console-6f689c85c4-fv97m | Started | Started container console |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Started | Started container extract-utilities |
| | openshift-marketplace | multus | community-operators-szmjn | AddedInterface | Add eth0 [10.128.0.29/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29412870-qng6z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | multus | certified-operators-5msvs | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | community-operators-szmjn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | community-operators-szmjn | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-szmjn | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-szmjn | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 1.143s (1.143s including waiting). Image size: 1204969293 bytes. |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-szmjn | Started | Started container extract-content |
openshift-marketplace |
kubelet |
redhat-operators-fh5cv |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-fh5cv |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-fh5cv |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.936s (1.936s including waiting). Image size: 1609963837 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-vrhjw |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.97s (1.97s including waiting). Image size: 1129027903 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-vrhjw |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
community-operators-szmjn |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
certified-operators-5msvs |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-vrhjw |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
community-operators-szmjn |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.903s (1.903s including waiting). Image size: 1201319250 bytes. | |
openshift-marketplace |
kubelet |
community-operators-szmjn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" | |
openshift-marketplace |
kubelet |
redhat-marketplace-vrhjw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" | |
openshift-marketplace |
kubelet |
certified-operators-5msvs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" | |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29412870 | Completed | Job completed |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-szmjn | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 1.224s (1.224s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | community-operators-szmjn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 1.219s (1.219s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29412870, condition: Complete |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 1.274s (1.274s including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | community-operators-szmjn | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 700ms (701ms including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Created | Created container: registry-server |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_7facee7f-698e-426a-8150-12ee45f5d04a became leader |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6c9c84854 to 0 from 1 |
| | openshift-console | replicaset-controller | console-6c9c84854 | SuccessfulDelete | Deleted pod: console-6c9c84854-xf7nv |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" |
| | openshift-marketplace | kubelet | redhat-marketplace-vrhjw | Killing | Stopping container registry-server |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-7978bf889c-n64v4_dae53d35-cb43-4268-a21c-9516e5778819 became leader |
| | openshift-marketplace | kubelet | community-operators-szmjn | Killing | Stopping container registry-server |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-74cff6cf84-bh8rz_26282a33-ddf6-4224-a7a6-7141f627f643 became leader |
| | openshift-machine-api | cluster-autoscaler-operator-7f88444875-6dk29_bc3dcaa6-e8d1-414e-a82e-a955d6b60de0 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-7f88444875-6dk29_bc3dcaa6-e8d1-414e-a82e-a955d6b60de0 became leader |
| | openshift-marketplace | kubelet | redhat-operators-fh5cv | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | certified-operators-5msvs | Killing | Stopping container registry-server |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-7479ffdf48-hpdzl_535e137b-4dc3-4edf-85c5-9dcfb5f67de8 became leader |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-68c95b6cf5-fmdmz_d379d11e-21df-4480-90b2-3dedde08f6aa became leader |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from True to False ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from False to True ("CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: ") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b795784b8-44frm_87d4a7b0-15d6-4185-b904-12dad8a574ed became leader |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-b5dddf8f5-kwb74_a7d5c121-0999-4e97-8ce2-a457be70f246 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 3 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-65dc4bcb88-96zcz_7c20b2e6-7b0e-4861-afea-6f26152e434e became leader |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-667484ff5-n7qz8_98f3548c-8be1-46cc-a611-28e1993e2785 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled feature-gate list as the first FeatureGatesInitialized event above) |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-6b8bb995f7-b68p8_22f764b5-c1bd-47bd-b725-8eb96f6c0e82 became leader |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_4ddaa579-7513-46ae-a5e0-adb991400eaa became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled feature-gate list as the first FeatureGatesInitialized event above) |
| | openshift-multus | default-scheduler | cni-sysctl-allowlist-ds-4c6x2 | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-4c6x2 to master-0 |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-4c6x2 |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-4c6x2 | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-4c6x2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-4c6x2 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-56f5898f45-fhnc5_fa68acdc-279d-480d-875f-da1f7d011634 became leader |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-4c6x2 | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-f84784664-ntb9w_371ee5d7-227d-40fd-8476-2529a8ca88fa became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled feature-gate list as the first FeatureGatesInitialized event above) |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bbd9b9dff-rrfsm_311de563-d0cd-4753-96dd-6e75852cb499 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-bbd9b9dff-rrfsm_311de563-d0cd-4753-96dd-6e75852cb499 became leader |
| | openshift-multus | multus | multus-admission-controller-574cbf778d-hr92j | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-574cbf778d to 1 |
| | openshift-multus | replicaset-controller | multus-admission-controller-574cbf778d | SuccessfulCreate | Created pod: multus-admission-controller-574cbf778d-hr92j |
| | openshift-multus | kubelet | multus-admission-controller-574cbf778d-hr92j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" already present on machine |
| | openshift-multus | default-scheduler | multus-admission-controller-574cbf778d-hr92j | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-574cbf778d-hr92j to master-0 |
| | openshift-multus | kubelet | multus-admission-controller-574cbf778d-hr92j | Started | Started container kube-rbac-proxy |
| | openshift-multus | replicaset-controller | multus-admission-controller-84c998f64f | SuccessfulDelete | Deleted pod: multus-admission-controller-84c998f64f-8stq7 |
| | openshift-multus | kubelet | multus-admission-controller-574cbf778d-hr92j | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-574cbf778d-hr92j | Created | Created container: multus-admission-controller |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-84c998f64f to 0 from 1 |
| | openshift-multus | kubelet | multus-admission-controller-574cbf778d-hr92j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-574cbf778d-hr92j | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Killing | Stopping container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-84c998f64f-8stq7 | Killing | Stopping container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled feature-gate list as the first FeatureGatesInitialized event above) |
| | openshift-kube-controller-manager | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled feature-gate list as the first FeatureGatesInitialized event above) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5b557b5f57-s5s96_95fedb1b-3bdf-49a1-95e0-c2efa2c8c672 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-459a0309a4bacb184a38028403c86289 to Done |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: ") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-459a0309a4bacb184a38028403c86289 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-459a0309a4bacb184a38028403c86289 |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-589f5cdc9d-5h2kn_2734fe26-bc02-46ab-9eb2-20a54b25f50a became leader |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-4c6x2 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-67c4cff67d-q2lxz_660a9705-c23d-46c7-8a3d-5c30388f83fc became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_5bc363b3-9b3d-4b39-8608-66110192a6fc became leader |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_9ce4ba6f-ed55-4a8a-bf4a-54370538ef6c became leader |
| | openshift-operator-lifecycle-manager | package-server-manager-75b4d49d4c-h599p_8d6cc683-a8df-42e6-b0de-9604a4802882 | packageserver-controller-lock | LeaderElection | package-server-manager-75b4d49d4c-h599p_8d6cc683-a8df-42e6-b0de-9604a4802882 became leader |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Created | Created container: machine-config-daemon |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-2ztl9 | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled feature-gate list as the first FeatureGatesInitialized event above) |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled feature-gate list as the first FeatureGatesInitialized event above) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.28} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a}] |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-459a0309a4bacb184a38028403c86289 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-459a0309a4bacb184a38028403c86289 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-459a0309a4bacb184a38028403c86289 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 3 to 4 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-7c4697b5f5-9f69p_d1f6796f-06a2-4eb1-ac8b-9fe1a88c224d became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-77df56447c-vsrxx_126f1d50-6d71-49c5-96ca-32b8d50832c9 became leader |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-5f574c6c79-86bh9_aa1722f9-a178-40ed-a4c9-df0b71c3e945 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("TargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 5 to 6 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-6-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine |
| | openshift-kube-scheduler | multus | revision-pruner-6-master-0 | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | Created | Created container: pruner |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | Started | Started container pruner |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_107a7e46-ddbf-4d88-b0a4-3a7f0f92e986 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_5d0decf1-d57a-4e41-9c14-fd189e86e309 became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for sushy-emulator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-storage namespace |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | SuccessfulCreate | Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd |
| | openshift-marketplace | default-scheduler | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd to master-0 |
| | openshift-marketplace | multus | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Created | Created container: util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Started | Started container util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.496s (1.496s including waiting). Image size: 108204 bytes. |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Created | Created container: pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Started | Started container pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Created | Created container: extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Started | Started container extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | Completed | Job completed |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-storage | default-scheduler | lvms-operator-7b9fc4788d-lj42f | Scheduled | Successfully assigned openshift-storage/lvms-operator-7b9fc4788d-lj42f to master-0 |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-7b9fc4788d to 1 |
| | openshift-storage | replicaset-controller | lvms-operator-7b9fc4788d | SuccessfulCreate | Created pod: lvms-operator-7b9fc4788d-lj42f |
| | openshift-storage | multus | lvms-operator-7b9fc4788d-lj42f | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
| | openshift-storage | kubelet | lvms-operator-7b9fc4788d-lj42f | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" |
| | openshift-storage | kubelet | lvms-operator-7b9fc4788d-lj42f | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.506s (4.506s including waiting). Image size: 238305644 bytes. |
| | openshift-storage | kubelet | lvms-operator-7b9fc4788d-lj42f | Created | Created container: manager |
| | openshift-storage | kubelet | lvms-operator-7b9fc4788d-lj42f | Started | Started container manager |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | install strategy completed with no errors |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nmstate namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for metallb-system namespace |
| | openshift-marketplace | default-scheduler | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Scheduled | Successfully assigned openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb to master-0 |
| | openshift-marketplace | job-controller | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3 | SuccessfulCreate | Created pod: 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb |
| | openshift-marketplace | multus | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Created | Created container: util |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Started | Started container util |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Pulling | Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" |
| | openshift-marketplace | default-scheduler | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Scheduled | Successfully assigned openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 to master-0 |
| | openshift-marketplace | job-controller | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397 | SuccessfulCreate | Created pod: af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 |
| | openshift-marketplace | multus | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Created | Created container: util |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Started | Started container util |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a" |
| | openshift-marketplace | default-scheduler | redhat-operators-9jwkv | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-9jwkv to master-0 |
| | openshift-marketplace | job-controller | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3 | SuccessfulCreate | Created pod: 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r |
| | openshift-marketplace | multus | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-marketplace | default-scheduler | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Scheduled | Successfully assigned openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r to master-0 |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Created | Created container: pull |
| | openshift-marketplace | multus | redhat-operators-9jwkv | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Started | Started container pull |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Started | Started container pull |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Created | Created container: pull |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a" in 1.413s (1.413s including waiting). Image size: 329358 bytes. |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" in 3.22s (3.22s including waiting). Image size: 105944483 bytes. |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Created | Created container: util |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Started | Started container util |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47" |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Started | Started container extract |
| | openshift-marketplace | kubelet | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb | Created | Created container: extract |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Started | Started container extract |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 926ms (926ms including waiting). Image size: 1609963837 bytes. |
| | openshift-marketplace | kubelet | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9 | Created | Created container: extract |
| | openshift-marketplace | job-controller | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5 | SuccessfulCreate | Created pod: 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 |
| | openshift-marketplace | default-scheduler | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Scheduled | Successfully assigned openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 to master-0 |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Created | Created container: pull |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Started | Started container pull |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Started | Started container extract-content |
| | openshift-marketplace | multus | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47" in 1.178s (1.178s including waiting). Image size: 176484 bytes. |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Started | Started container extract |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Started | Started container util |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Created | Created container: util |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine |
| | openshift-marketplace | kubelet | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r | Created | Created container: extract |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac" |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 492ms (492ms including waiting). Image size: 912736453 bytes. |
| | openshift-marketplace | job-controller | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397 | Completed | Job completed |
| | openshift-marketplace | job-controller | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3 | Completed | Job completed |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Started | Started container registry-server |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac" in 1.308s (1.308s including waiting). Image size: 4896371 bytes. |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Started | Started container pull |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Created | Created container: pull |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | RequirementsUnknown | requirements not yet checked |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Created | Created container: extract |
| | openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9 | Started | Started container extract |
| | openshift-marketplace | job-controller | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3 | Completed | Job completed |
| | openshift-marketplace | job-controller | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5 | Completed | Job completed |
openshift-marketplace |
kubelet |
redhat-operators-9jwkv |
Unhealthy |
Startup probe failed: timeout: failed to connect service ":50051" within 1s | |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | RequirementsUnknown | requirements not yet checked |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | RequirementsNotMet | one or more requirements couldn't be found |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-nmstate | default-scheduler | nmstate-operator-5b5b58f5c8-ddls5 | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5 to master-0 |
| | openshift-marketplace | kubelet | redhat-operators-9jwkv | Killing | Stopping container registry-server |
| | openshift-nmstate | replicaset-controller | nmstate-operator-5b5b58f5c8 | SuccessfulCreate | Created pod: nmstate-operator-5b5b58f5c8-ddls5 |
| | openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-5b5b58f5c8 to 1 |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | InstallSucceeded | waiting for install components to report healthy |
| | metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-d988cbf4b to 1 |
| | metallb-system | replicaset-controller | metallb-operator-controller-manager-d988cbf4b | SuccessfulCreate | Created pod: metallb-operator-controller-manager-d988cbf4b-f589s |
| | openshift-nmstate | operator-lifecycle-manager | install-6thfz | AppliedWithWarnings | 1 warning(s) generated during installation of operator "kubernetes-nmstate-operator.4.18.0-202511191213" (CustomResourceDefinition "nmstates.nmstate.io"): nmstate.io/v1beta1 NMState is deprecated; use nmstate.io/v1 NMState |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. |
| | metallb-system | default-scheduler | metallb-operator-controller-manager-d988cbf4b-f589s | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s to master-0 |
| | metallb-system | replicaset-controller | metallb-operator-webhook-server-7d9bbdcff5 | SuccessfulCreate | Created pod: metallb-operator-webhook-server-7d9bbdcff5-k5wlt |
| | metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-7d9bbdcff5 to 1 |
| | metallb-system | default-scheduler | metallb-operator-webhook-server-7d9bbdcff5-k5wlt | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt to master-0 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | metallb-system | multus | metallb-operator-controller-manager-d988cbf4b-f589s | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| | openshift-nmstate | multus | nmstate-operator-5b5b58f5c8-ddls5 | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-controller-manager-d988cbf4b-f589s | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a" |
| | metallb-system | multus | metallb-operator-webhook-server-7d9bbdcff5-k5wlt | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-webhook-server-7d9bbdcff5-k5wlt | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" |
| | openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-ddls5 | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | RequirementsUnknown | requirements not yet checked |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager namespace |
| | default | cert-manager-istio-csr-controller | | ControllerStarted | controller is starting |
| | cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-86cb77c54b to 1 |
| | cert-manager | replicaset-controller | cert-manager-webhook-f4fb5df64 | FailedCreate | Error creating: pods "cert-manager-webhook-f4fb5df64-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
| | cert-manager | default-scheduler | cert-manager-webhook-f4fb5df64-kfv5q | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-f4fb5df64-kfv5q to master-0 |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | NeedsReinstall | calculated deployment install is bad |
| | metallb-system | operator-lifecycle-manager | install-pkdzx | AppliedWithWarnings | 1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202511181540" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 |
| | cert-manager | replicaset-controller | cert-manager-webhook-f4fb5df64 | SuccessfulCreate | Created pod: cert-manager-webhook-f4fb5df64-kfv5q |
| | cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-f4fb5df64 to 1 |
| (x2) | openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | cert-manager | default-scheduler | cert-manager-cainjector-855d9ccff4-9qmpd | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd to master-0 |
| | cert-manager | replicaset-controller | cert-manager-cainjector-855d9ccff4 | SuccessfulCreate | Created pod: cert-manager-cainjector-855d9ccff4-9qmpd |
| | cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-855d9ccff4 to 1 |
| (x11) | cert-manager | replicaset-controller | cert-manager-86cb77c54b | FailedCreate | Error creating: pods "cert-manager-86cb77c54b-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallSucceeded | waiting for install components to report healthy |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallWaiting | installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
| | metallb-system | kubelet | metallb-operator-webhook-server-7d9bbdcff5-k5wlt | Created | Created container: webhook-server |
| | metallb-system | kubelet | metallb-operator-controller-manager-d988cbf4b-f589s | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a" in 10.311s (10.311s including waiting). Image size: 457005415 bytes. |
| | metallb-system | kubelet | metallb-operator-webhook-server-7d9bbdcff5-k5wlt | Started | Started container webhook-server |
| | metallb-system | kubelet | metallb-operator-controller-manager-d988cbf4b-f589s | Created | Created container: manager |
| | metallb-system | kubelet | metallb-operator-controller-manager-d988cbf4b-f589s | Started | Started container manager |
| | cert-manager | multus | cert-manager-cainjector-855d9ccff4-9qmpd | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-ddls5 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf" in 10.293s (10.293s including waiting). Image size: 445876816 bytes. |
| | openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-ddls5 | Created | Created container: nmstate-operator |
| | openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-ddls5 | Started | Started container nmstate-operator |
| | metallb-system | kubelet | metallb-operator-webhook-server-7d9bbdcff5-k5wlt | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" in 10.504s (10.504s including waiting). Image size: 549581950 bytes. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | AllRequirementsMet | all requirements found, attempting install |
| | cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-9qmpd | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" |
| | openshift-operators | replicaset-controller | observability-operator-d8bb48f5d | SuccessfulCreate | Created pod: observability-operator-d8bb48f5d-25nhp |
| | cert-manager | kubelet | cert-manager-webhook-f4fb5df64-kfv5q | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" |
| | openshift-operators | default-scheduler | observability-operator-d8bb48f5d-25nhp | Scheduled | Successfully assigned openshift-operators/observability-operator-d8bb48f5d-25nhp to master-0 |
| | openshift-operators | default-scheduler | obo-prometheus-operator-668cf9dfbb-4dft5 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5 to master-0 |
| | openshift-operators | deployment-controller | obo-prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-admission-webhook-7b955f4bd8 to 2 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7b955f4bd8 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7b955f4bd8 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg |
| | cert-manager | multus | cert-manager-webhook-f4fb5df64-kfv5q | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| | metallb-system | metallb-operator-controller-manager-d988cbf4b-f589s_abb402a4-24a2-4d25-82cb-f921707b8070 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-d988cbf4b-f589s_abb402a4-24a2-4d25-82cb-f921707b8070 became leader |
| | openshift-operators | deployment-controller | obo-prometheus-operator | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-668cf9dfbb to 1 |
| | openshift-operators | deployment-controller | observability-operator | ScalingReplicaSet | Scaled up replica set observability-operator-d8bb48f5d to 1 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-668cf9dfbb | SuccessfulCreate | Created pod: obo-prometheus-operator-668cf9dfbb-4dft5 |
| | openshift-operators | default-scheduler | obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg to master-0 |
| | openshift-operators | default-scheduler | obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf to master-0 |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
| | openshift-operators | default-scheduler | perses-operator-5446b9c989-vxj9b | Scheduled | Successfully assigned openshift-operators/perses-operator-5446b9c989-vxj9b to master-0 |
| | openshift-operators | replicaset-controller | perses-operator-5446b9c989 | SuccessfulCreate | Created pod: perses-operator-5446b9c989-vxj9b |
| | openshift-operators | deployment-controller | perses-operator | ScalingReplicaSet | Scaled up replica set perses-operator-5446b9c989 to 1 |
| | openshift-operators | multus | obo-prometheus-operator-668cf9dfbb-4dft5 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-operators | multus | perses-operator-5446b9c989-vxj9b | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes |
| | openshift-operators | multus | observability-operator-d8bb48f5d-25nhp | AddedInterface | Add eth0 [10.128.0.93/23] from ovn-kubernetes |
| | cert-manager | replicaset-controller | cert-manager-86cb77c54b | SuccessfulCreate | Created pod: cert-manager-86cb77c54b-k7j45 |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-4dft5 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | InstallSucceeded | install strategy completed with no errors |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-25nhp | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" |
| | openshift-operators | kubelet | perses-operator-5446b9c989-vxj9b | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" |
| | cert-manager | default-scheduler | cert-manager-86cb77c54b-k7j45 | Scheduled | Successfully assigned cert-manager/cert-manager-86cb77c54b-k7j45 to master-0 |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. |
| | cert-manager | multus | cert-manager-86cb77c54b-k7j45 | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 7.506s (7.506s including waiting). Image size: 258533084 bytes. |
| | cert-manager | kubelet | cert-manager-86cb77c54b-k7j45 | Pulled | Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" already present on machine |
| | cert-manager | kubelet | cert-manager-webhook-f4fb5df64-kfv5q | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 13.407s (13.407s including waiting). Image size: 427346153 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-4dft5 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" in 8.313s (8.313s including waiting). Image size: 306562378 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 7.495s (7.495s including waiting). Image size: 258533084 bytes. |
| | cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-9qmpd | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 13.375s (13.375s including waiting). Image size: 427346153 bytes. |
| | openshift-operators | kubelet | perses-operator-5446b9c989-vxj9b | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" in 7.476s (7.476s including waiting). Image size: 282278649 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-4dft5 | Started | Started container prometheus-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-4dft5 | Created | Created container: prometheus-operator |
| | openshift-operators | kubelet | perses-operator-5446b9c989-vxj9b | Created | Created container: perses-operator |
| | cert-manager | kubelet | cert-manager-webhook-f4fb5df64-kfv5q | Created | Created container: cert-manager-webhook |
| | cert-manager | kubelet | cert-manager-86cb77c54b-k7j45 | Created | Created container: cert-manager-controller |
| | cert-manager | kubelet | cert-manager-86cb77c54b-k7j45 | Started | Started container cert-manager-controller |
| | cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-9qmpd | Started | Started container cert-manager-cainjector |
| | cert-manager | kubelet | cert-manager-webhook-f4fb5df64-kfv5q | Started | Started container cert-manager-webhook |
| | openshift-operators | kubelet | perses-operator-5446b9c989-vxj9b | Started | Started container perses-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg | Created | Created container: prometheus-operator-admission-webhook |
| | cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-9qmpd | Created | Created container: cert-manager-cainjector |
| | kube-system | cert-manager-cainjector-855d9ccff4-9qmpd_c0be0226-706e-4372-9ab8-c5200286a425 | cert-manager-cainjector-leader-election | LeaderElection | cert-manager-cainjector-855d9ccff4-9qmpd_c0be0226-706e-4372-9ab8-c5200286a425 became leader |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-25nhp | Started | Started container operator |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment observability-operator to become ready: deployment "observability-operator" not available: Deployment does not have minimum availability. |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-25nhp | Created | Created container: operator |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-25nhp | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" in 11.851s (11.851s including waiting). Image size: 500139589 bytes. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallSucceeded | install strategy completed with no errors |
| | kube-system | cert-manager-leader-election | cert-manager-controller | LeaderElection | cert-manager-86cb77c54b-k7j45-external-cert-manager-controller became leader |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallSucceeded | install strategy completed with no errors |
| | metallb-system | replicaset-controller | frr-k8s-webhook-server-7fcb986d4 | SuccessfulCreate | Created pod: frr-k8s-webhook-server-7fcb986d4-4gsth |
| | metallb-system | default-scheduler | frr-k8s-g7b5b | Scheduled | Successfully assigned metallb-system/frr-k8s-g7b5b to master-0 |
| | metallb-system | default-scheduler | controller-f8648f98b-pf6cm | Scheduled | Successfully assigned metallb-system/controller-f8648f98b-pf6cm to master-0 |
| | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-g7b5b |
| | metallb-system | default-scheduler | speaker-4gqc8 | Scheduled | Successfully assigned metallb-system/speaker-4gqc8 to master-0 |
| | metallb-system | deployment-controller | frr-k8s-webhook-server | ScalingReplicaSet | Scaled up replica set frr-k8s-webhook-server-7fcb986d4 to 1 |
| | metallb-system | default-scheduler | frr-k8s-webhook-server-7fcb986d4-4gsth | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth to master-0 |
| | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-4gqc8 |
| | default | garbage-collector-controller | frr-k8s-validating-webhook-configuration | OwnerRefInvalidNamespace | ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: cf859164-ac62-4746-b8df-ee08a79c8f75] does not exist in namespace "" |
| | metallb-system | replicaset-controller | controller-f8648f98b | SuccessfulCreate | Created pod: controller-f8648f98b-pf6cm |
| | metallb-system | deployment-controller | controller | ScalingReplicaSet | Scaled up replica set controller-f8648f98b to 1 |
| | metallb-system | multus | frr-k8s-webhook-server-7fcb986d4-4gsth | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes |
| | metallb-system | kubelet | frr-k8s-g7b5b | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" |
| | metallb-system | kubelet | frr-k8s-webhook-server-7fcb986d4-4gsth | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" |
| | metallb-system | kubelet | controller-f8648f98b-pf6cm | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "controller-certs-secret" not found |
| | metallb-system | kubelet | controller-f8648f98b-pf6cm | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" |
| | metallb-system | kubelet | controller-f8648f98b-pf6cm | Started | Started container controller |
| | metallb-system | kubelet | controller-f8648f98b-pf6cm | Created | Created container: controller |
| | metallb-system | kubelet | controller-f8648f98b-pf6cm | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine |
| (x3) | metallb-system | kubelet | speaker-4gqc8 | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
| | metallb-system | multus | controller-f8648f98b-pf6cm | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
| | openshift-nmstate | default-scheduler | nmstate-console-plugin-7fbb5f6569-nrqx6 | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6 to master-0 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5cc76c75d9 to 1 |
| | default | endpoint-controller | nmstate-console-plugin | FailedToCreateEndpoint | Failed to create endpoint for service openshift-nmstate/nmstate-console-plugin: endpoints "nmstate-console-plugin" already exists |
| | openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-5f6d4c5ccb to 1 |
| | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-7fbb5f6569 to 1 |
| | openshift-nmstate | default-scheduler | nmstate-handler-6x7jt | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-6x7jt to master-0 |
| | openshift-nmstate | kubelet | nmstate-handler-6x7jt | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" |
| | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-6x7jt |
| | openshift-nmstate | default-scheduler | nmstate-metrics-7f946cbc9-jdqp5 | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5 to master-0 |
| | openshift-console | default-scheduler | console-5cc76c75d9-9mh28 | Scheduled | Successfully assigned openshift-console/console-5cc76c75d9-9mh28 to master-0 |
| | openshift-nmstate | multus | nmstate-metrics-7f946cbc9-jdqp5 | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes |
| | openshift-nmstate | replicaset-controller | nmstate-webhook-5f6d4c5ccb | SuccessfulCreate | Created pod: nmstate-webhook-5f6d4c5ccb-7xtkl |
| | openshift-nmstate | replicaset-controller | nmstate-metrics-7f946cbc9 | SuccessfulCreate | Created pod: nmstate-metrics-7f946cbc9-jdqp5 |
| | openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-7f946cbc9 to 1 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-nmstate | default-scheduler | nmstate-webhook-5f6d4c5ccb-7xtkl | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl to master-0 |
| | openshift-nmstate | kubelet | nmstate-webhook-5f6d4c5ccb-7xtkl | FailedMount | MountVolume.SetUp failed for volume "tls-key-pair" : secret "openshift-nmstate-webhook" not found |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.28, 1 replicas available" |
| | openshift-console | replicaset-controller | console-5cc76c75d9 | SuccessfulCreate | Created pod: console-5cc76c75d9-9mh28 |
| | openshift-nmstate | replicaset-controller | nmstate-console-plugin-7fbb5f6569 | SuccessfulCreate | Created pod: nmstate-console-plugin-7fbb5f6569-nrqx6 |
| | openshift-nmstate | multus | nmstate-webhook-5f6d4c5ccb-7xtkl | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
openshift-console |
kubelet |
console-5cc76c75d9-9mh28 |
Started |
Started container console | |
openshift-console |
kubelet |
console-5cc76c75d9-9mh28 |
Created |
Created container: console | |
openshift-console |
kubelet |
console-5cc76c75d9-9mh28 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine | |
openshift-console |
multus |
console-5cc76c75d9-9mh28 |
AddedInterface |
Add eth0 [10.128.0.102/23] from ovn-kubernetes | |
openshift-nmstate |
kubelet |
nmstate-metrics-7f946cbc9-jdqp5 |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" | |
openshift-nmstate |
multus |
nmstate-console-plugin-7fbb5f6569-nrqx6 |
AddedInterface |
Add eth0 [10.128.0.101/23] from ovn-kubernetes | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-7fbb5f6569-nrqx6 |
Pulling |
Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513" | |
metallb-system |
kubelet |
controller-f8648f98b-pf6cm |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
speaker-4gqc8 |
Started |
Started container speaker | |
metallb-system |
kubelet |
controller-f8648f98b-pf6cm |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
controller-f8648f98b-pf6cm |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" in 2.565s (2.565s including waiting). Image size: 459566572 bytes. | |
openshift-nmstate |
kubelet |
nmstate-webhook-5f6d4c5ccb-7xtkl |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" | |
metallb-system |
kubelet |
speaker-4gqc8 |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine | |
metallb-system |
kubelet |
speaker-4gqc8 |
Created |
Created container: speaker | |
metallb-system |
kubelet |
speaker-4gqc8 |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-7fbb5f6569-nrqx6 |
Created |
Created container: nmstate-console-plugin | |
openshift-nmstate |
kubelet |
nmstate-handler-6x7jt |
Created |
Created container: nmstate-handler | |
openshift-nmstate |
kubelet |
nmstate-webhook-5f6d4c5ccb-7xtkl |
Started |
Started container nmstate-webhook | |
metallb-system |
kubelet |
frr-k8s-webhook-server-7fcb986d4-4gsth |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 7.892s (7.892s including waiting). Image size: 656503086 bytes. | |
metallb-system |
kubelet |
frr-k8s-webhook-server-7fcb986d4-4gsth |
Created |
Created container: frr-k8s-webhook-server | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-7fbb5f6569-nrqx6 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513" in 5.193s (5.193s including waiting). Image size: 447845824 bytes. | |
openshift-nmstate |
kubelet |
nmstate-webhook-5f6d4c5ccb-7xtkl |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 4.263s (4.263s including waiting). Image size: 492626754 bytes. | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-7fbb5f6569-nrqx6 |
Started |
Started container nmstate-console-plugin | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container cp-frr-files | |
metallb-system |
kubelet |
frr-k8s-webhook-server-7fcb986d4-4gsth |
Started |
Started container frr-k8s-webhook-server | |
metallb-system |
kubelet |
speaker-4gqc8 |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
speaker-4gqc8 |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-metrics-7f946cbc9-jdqp5 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 5.292s (5.292s including waiting). Image size: 492626754 bytes. | |
openshift-nmstate |
kubelet |
nmstate-metrics-7f946cbc9-jdqp5 |
Created |
Created container: nmstate-metrics | |
openshift-nmstate |
kubelet |
nmstate-metrics-7f946cbc9-jdqp5 |
Started |
Started container nmstate-metrics | |
openshift-nmstate |
kubelet |
nmstate-metrics-7f946cbc9-jdqp5 |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine | |
openshift-nmstate |
kubelet |
nmstate-handler-6x7jt |
Started |
Started container nmstate-handler | |
openshift-nmstate |
kubelet |
nmstate-webhook-5f6d4c5ccb-7xtkl |
Created |
Created container: nmstate-webhook | |
openshift-nmstate |
kubelet |
nmstate-handler-6x7jt |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 5.852s (5.852s including waiting). Image size: 492626754 bytes. | |
openshift-nmstate |
kubelet |
nmstate-metrics-7f946cbc9-jdqp5 |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-metrics-7f946cbc9-jdqp5 |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: cp-frr-files | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 8.073s (8.073s including waiting). Image size: 656503086 bytes. | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container cp-reloader | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: cp-reloader | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: cp-metrics | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: controller | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container controller | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container cp-metrics | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: frr | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container frr | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container reloader | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: frr-metrics | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Started |
Started container frr-metrics | |
metallb-system |
kubelet |
frr-k8s-g7b5b |
Created |
Created container: reloader | |
openshift-console |
kubelet |
console-6f689c85c4-fv97m |
Killing |
Stopping container console | |
openshift-console |
replicaset-controller |
console-6f689c85c4 |
SuccessfulDelete |
Deleted pod: console-6f689c85c4-fv97m | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-6f689c85c4 to 0 from 1 | |
| (x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.28, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.28, 2 replicas available" | |
openshift-storage |
daemonset-controller |
vg-manager |
SuccessfulCreate |
Created pod: vg-manager-pq5qt | |
openshift-storage |
default-scheduler |
vg-manager-pq5qt |
Scheduled |
Successfully assigned openshift-storage/vg-manager-pq5qt to master-0 | |
openshift-storage |
multus |
vg-manager-pq5qt |
AddedInterface |
Add eth0 [10.128.0.103/23] from ovn-kubernetes | |
| (x11) | openshift-storage |
LVMClusterReconciler |
lvmcluster |
ResourceReconciliationIncomplete |
LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
| (x2) | openshift-storage |
kubelet |
vg-manager-pq5qt |
Started |
Started container vg-manager |
| (x2) | openshift-storage |
kubelet |
vg-manager-pq5qt |
Created |
Created container: vg-manager |
| (x2) | openshift-storage |
kubelet |
vg-manager-pq5qt |
Pulled |
Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
openstack-operators |
kubelet |
openstack-operator-index-zjrm9 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" | |
openstack-operators |
multus |
openstack-operator-index-zjrm9 |
AddedInterface |
Add eth0 [10.128.0.104/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openstack namespace | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openstack-operators namespace | |
openstack-operators |
default-scheduler |
openstack-operator-index-zjrm9 |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-index-zjrm9 to master-0 | |
| (x5) | default |
operator-lifecycle-manager |
openstack-operators |
ResolutionFailed |
error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index |
openstack-operators |
kubelet |
openstack-operator-index-zjrm9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 1.487s (1.487s including waiting). Image size: 913061644 bytes. | |
openstack-operators |
kubelet |
openstack-operator-index-zjrm9 |
Created |
Created container: registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-zjrm9 |
Started |
Started container registry-server | |
| (x5) | default |
operator-lifecycle-manager |
openstack-operators |
ResolutionFailed |
error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.188.72:50051: connect: connection refused" |
openstack-operators |
job-controller |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864703e2 |
SuccessfulCreate |
Created pod: 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 | |
openstack-operators |
default-scheduler |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Scheduled |
Successfully assigned openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 to master-0 | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Started |
Started container util | |
openstack-operators |
multus |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
AddedInterface |
Add eth0 [10.128.0.105/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Created |
Created container: util | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:b102924657dd294d08db769acac26201e395a333" | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:b102924657dd294d08db769acac26201e395a333" in 810ms (810ms including waiting). Image size: 108093 bytes. | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Created |
Created container: pull | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Started |
Started container pull | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Started |
Started container extract | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine | |
openstack-operators |
kubelet |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925 |
Created |
Created container: extract | |
openstack-operators |
job-controller |
98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864703e2 |
Completed |
Job completed | |
openstack-operators |
default-scheduler |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s to master-0 | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
RequirementsNotMet |
one or more requirements couldn't be found | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" not available: Deployment does not have minimum availability. | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
AllRequirementsMet |
all requirements found, attempting install | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
RequirementsUnknown |
requirements not yet checked | |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-operator-7dd5c7bb7c to 1 | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-7dd5c7bb7c |
SuccessfulCreate |
Created pod: openstack-operator-controller-operator-7dd5c7bb7c-clg7s | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf" | |
openstack-operators |
multus |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s |
AddedInterface |
Add eth0 [10.128.0.106/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf" in 4.256s (4.256s including waiting). Image size: 292248394 bytes. | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s |
Created |
Created container: operator | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s |
Started |
Started container operator | |
openstack-operators |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s_77c8af05-ebca-450d-af7f-f87179ea203f |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s_77c8af05-ebca-450d-af7f-f87179ea203f became leader | |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-operator-7b84d49558 to 1 | |
openstack-operators |
default-scheduler |
openstack-operator-controller-operator-7b84d49558-t8dx9 |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9 to master-0 | |
| (x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallSucceeded |
waiting for install components to report healthy |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-7b84d49558 |
SuccessfulCreate |
Created pod: openstack-operator-controller-operator-7b84d49558-t8dx9 | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
ComponentUnhealthy |
installing: deployment changed old hash=1KEL87dso94VTXKOtktBoUrrGqQm2yl8jPcLKu, new hash=admOte3XFo6hKgre4VXGGD1lFfL8qoSFymtHdE | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" waiting for 1 outdated replica(s) to be terminated | |
openstack-operators |
multus |
openstack-operator-controller-operator-7b84d49558-t8dx9 |
AddedInterface |
Add eth0 [10.128.0.107/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7b84d49558-t8dx9 |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf" already present on machine | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7b84d49558-t8dx9 |
Created |
Created container: operator | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7b84d49558-t8dx9 |
Started |
Started container operator | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-7dd5c7bb7c-clg7s |
Killing |
Stopping container operator | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-7dd5c7bb7c |
SuccessfulDelete |
Deleted pod: openstack-operator-controller-operator-7dd5c7bb7c-clg7s | |
| (x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallSucceeded |
install strategy completed with no errors |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled down replica set openstack-operator-controller-operator-7dd5c7bb7c to 0 from 1 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openstack-operators |
openstack-operator-controller-operator-7b84d49558-t8dx9_898d9480-8823-45a3-83c4-0241786caa9e |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-operator-7b84d49558-t8dx9_898d9480-8823-45a3-83c4-0241786caa9e became leader | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-pwkjb" | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
cinder-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-8sknn" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
cinder-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-request-manager |
cinder-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "cinder-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
cinder-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-8dt5j" | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
heat-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-xqgb2" | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-hxq54" | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-jvbft" | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
ironic-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-sx6tp" | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
octavia-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| | openstack-operators | cert-manager-certificates-trigger | nova-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | neutron-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-request-manager | designate-operator-metrics-certs | Requested | Created new CertificateRequest resource "designate-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | ovn-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | manila-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "manila-operator-metrics-certs-fv9lj" |
| | openstack-operators | cert-manager-certificates-key-manager | mariadb-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-bq7mz" |
| | openstack-operators | default-scheduler | barbican-operator-controller-manager-5cd89994b5-2gn4f | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f to master-0 |
| | openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-g9mxn" |
| | openstack-operators | default-scheduler | keystone-operator-controller-manager-58b8dcc5fb-crqhq | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq to master-0 |
| | openstack-operators | replicaset-controller | barbican-operator-controller-manager-5cd89994b5 | SuccessfulCreate | Created pod: barbican-operator-controller-manager-5cd89994b5-2gn4f |
| | openstack-operators | replicaset-controller | keystone-operator-controller-manager-58b8dcc5fb | SuccessfulCreate | Created pod: keystone-operator-controller-manager-58b8dcc5fb-crqhq |
| | openstack-operators | default-scheduler | cinder-operator-controller-manager-f8856dd79-rbqv4 | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4 to master-0 |
| | openstack-operators | deployment-controller | horizon-operator-controller-manager | ScalingReplicaSet | Scaled up replica set horizon-operator-controller-manager-f6cc97788 to 1 |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | placement-operator-controller-manager | ScalingReplicaSet | Scaled up replica set placement-operator-controller-manager-6b64f6f645 to 1 |
| | openstack-operators | replicaset-controller | placement-operator-controller-manager-6b64f6f645 | SuccessfulCreate | Created pod: placement-operator-controller-manager-6b64f6f645-zgkn7 |
| | openstack-operators | deployment-controller | ironic-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ironic-operator-controller-manager-7c9bfd6967 to 1 |
| | openstack-operators | replicaset-controller | ironic-operator-controller-manager-7c9bfd6967 | SuccessfulCreate | Created pod: ironic-operator-controller-manager-7c9bfd6967-s2sbx |
| | openstack-operators | default-scheduler | placement-operator-controller-manager-6b64f6f645-zgkn7 | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7 to master-0 |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | default-scheduler | horizon-operator-controller-manager-f6cc97788-v9zzp | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp to master-0 |
| | openstack-operators | replicaset-controller | cinder-operator-controller-manager-f8856dd79 | SuccessfulCreate | Created pod: cinder-operator-controller-manager-f8856dd79-rbqv4 |
| | openstack-operators | deployment-controller | cinder-operator-controller-manager | ScalingReplicaSet | Scaled up replica set cinder-operator-controller-manager-f8856dd79 to 1 |
| | openstack-operators | deployment-controller | keystone-operator-controller-manager | ScalingReplicaSet | Scaled up replica set keystone-operator-controller-manager-58b8dcc5fb to 1 |
| | openstack-operators | cert-manager-certificates-key-manager | keystone-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-4frks" |
| | openstack-operators | default-scheduler | manila-operator-controller-manager-56f9fbf74b-p2kpr | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr to master-0 |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1" |
| | openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-6b9b669fdb to 1 |
| | openstack-operators | replicaset-controller | watcher-operator-controller-manager-6b9b669fdb | SuccessfulCreate | Created pod: watcher-operator-controller-manager-6b9b669fdb-tvkgp |
| | openstack-operators | replicaset-controller | horizon-operator-controller-manager-f6cc97788 | SuccessfulCreate | Created pod: horizon-operator-controller-manager-f6cc97788-v9zzp |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | replicaset-controller | manila-operator-controller-manager-56f9fbf74b | SuccessfulCreate | Created pod: manila-operator-controller-manager-56f9fbf74b-p2kpr |
| | openstack-operators | deployment-controller | manila-operator-controller-manager | ScalingReplicaSet | Scaled up replica set manila-operator-controller-manager-56f9fbf74b to 1 |
| | openstack-operators | default-scheduler | mariadb-operator-controller-manager-647d75769b-l99kn | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn to master-0 |
| | openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-57d98476c4 to 1 |
| | openstack-operators | default-scheduler | designate-operator-controller-manager-84bc9f68f5-s8bzw | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw to master-0 |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | heat-operator-controller-manager | ScalingReplicaSet | Scaled up replica set heat-operator-controller-manager-7fd96594c7 to 1 |
| | openstack-operators | replicaset-controller | heat-operator-controller-manager-7fd96594c7 | SuccessfulCreate | Created pod: heat-operator-controller-manager-7fd96594c7-24pft |
| | openstack-operators | default-scheduler | ironic-operator-controller-manager-7c9bfd6967-s2sbx | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx to master-0 |
| | openstack-operators | default-scheduler | swift-operator-controller-manager-696b999796-zd6d2 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-696b999796-zd6d2 to master-0 |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | replicaset-controller | designate-operator-controller-manager-84bc9f68f5 | SuccessfulCreate | Created pod: designate-operator-controller-manager-84bc9f68f5-s8bzw |
| | openstack-operators | deployment-controller | designate-operator-controller-manager | ScalingReplicaSet | Scaled up replica set designate-operator-controller-manager-84bc9f68f5 to 1 |
| | openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-6cb6d6b947 to 1 |
| | openstack-operators | default-scheduler | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 to master-0 |
| | openstack-operators | replicaset-controller | swift-operator-controller-manager-696b999796 | SuccessfulCreate | Created pod: swift-operator-controller-manager-696b999796-zd6d2 |
| | openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-696b999796 to 1 |
| | openstack-operators | replicaset-controller | openstack-baremetal-operator-controller-manager-6cb6d6b947 | SuccessfulCreate | Created pod: openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | designate-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | replicaset-controller | mariadb-operator-controller-manager-647d75769b | SuccessfulCreate | Created pod: mariadb-operator-controller-manager-647d75769b-l99kn |
| | openstack-operators | deployment-controller | mariadb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set mariadb-operator-controller-manager-647d75769b to 1 |
| | openstack-operators | default-scheduler | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd to master-0 |
| | openstack-operators | deployment-controller | infra-operator-controller-manager | ScalingReplicaSet | Scaled up replica set infra-operator-controller-manager-7d9c9d7fd8 to 1 |
| | openstack-operators | replicaset-controller | infra-operator-controller-manager-7d9c9d7fd8 | SuccessfulCreate | Created pod: infra-operator-controller-manager-7d9c9d7fd8-f4ttw |
| | openstack-operators | deployment-controller | barbican-operator-controller-manager | ScalingReplicaSet | Scaled up replica set barbican-operator-controller-manager-5cd89994b5 to 1 |
| | openstack-operators | deployment-controller | ovn-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ovn-operator-controller-manager-647f96877 to 1 |
| | openstack-operators | replicaset-controller | ovn-operator-controller-manager-647f96877 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-647f96877-kf2cl |
| | openstack-operators | default-scheduler | glance-operator-controller-manager-78cd4f7769-lmlf9 | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9 to master-0 |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | nova-operator-controller-manager | ScalingReplicaSet | Scaled up replica set nova-operator-controller-manager-865fc86d5b to 1 |
| | openstack-operators | cert-manager-certificates-trigger | swift-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | replicaset-controller | neutron-operator-controller-manager-7cdd6b54fb | SuccessfulCreate | Created pod: neutron-operator-controller-manager-7cdd6b54fb-d2kpd |
| | openstack-operators | default-scheduler | ovn-operator-controller-manager-647f96877-kf2cl | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl to master-0 |
| | openstack-operators | deployment-controller | neutron-operator-controller-manager | ScalingReplicaSet | Scaled up replica set neutron-operator-controller-manager-7cdd6b54fb to 1 |
| | openstack-operators | default-scheduler | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4 to master-0 |
| | openstack-operators | default-scheduler | nova-operator-controller-manager-865fc86d5b-pk8dx | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx to master-0 |
| | openstack-operators | default-scheduler | infra-operator-controller-manager-7d9c9d7fd8-f4ttw | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw to master-0 |
| | openstack-operators | replicaset-controller | glance-operator-controller-manager-78cd4f7769 | SuccessfulCreate | Created pod: glance-operator-controller-manager-78cd4f7769-lmlf9 |
| | openstack-operators | deployment-controller | glance-operator-controller-manager | ScalingReplicaSet | Scaled up replica set glance-operator-controller-manager-78cd4f7769 to 1 |
| | openstack-operators | replicaset-controller | telemetry-operator-controller-manager-7b5867bfc7 | SuccessfulCreate | Created pod: telemetry-operator-controller-manager-7b5867bfc7-b7fd4 |
| | openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-7b5867bfc7 to 1 |
| | openstack-operators | default-scheduler | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp to master-0 |
| | openstack-operators | deployment-controller | octavia-operator-controller-manager | ScalingReplicaSet | Scaled up replica set octavia-operator-controller-manager-845b79dc4f to 1 |
| | openstack-operators | default-scheduler | test-operator-controller-manager-57dfcdd5b8-vmnjr | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr to master-0 |
| | openstack-operators | replicaset-controller | octavia-operator-controller-manager-845b79dc4f | SuccessfulCreate | Created pod: octavia-operator-controller-manager-845b79dc4f-cj7z9 |
| | openstack-operators | default-scheduler | octavia-operator-controller-manager-845b79dc4f-cj7z9 | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9 to master-0 |
| | openstack-operators | replicaset-controller | test-operator-controller-manager-57dfcdd5b8 | SuccessfulCreate | Created pod: test-operator-controller-manager-57dfcdd5b8-vmnjr |
| | openstack-operators | deployment-controller | test-operator-controller-manager | ScalingReplicaSet | Scaled up replica set test-operator-controller-manager-57dfcdd5b8 to 1 |
| | openstack-operators | default-scheduler | heat-operator-controller-manager-7fd96594c7-24pft | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft to master-0 |
| | openstack-operators | replicaset-controller | nova-operator-controller-manager-865fc86d5b | SuccessfulCreate | Created pod: nova-operator-controller-manager-865fc86d5b-pk8dx |
| | openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1" |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" |
| | openstack-operators | multus | designate-operator-controller-manager-84bc9f68f5-s8bzw | AddedInterface | Add eth0 [10.128.0.110/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-trigger | telemetry-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-lmlf9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" |
| | openstack-operators | cert-manager-certificaterequests-approver | horizon-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | multus | glance-operator-controller-manager-78cd4f7769-lmlf9 | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | multus | ironic-operator-controller-manager-7c9bfd6967-s2sbx | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-s2sbx | Pulling | Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" |
| | openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-78955d896f to 1 |
| | openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-78955d896f | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-78955d896f-bbcpt |
| | openstack-operators | multus | cinder-operator-controller-manager-f8856dd79-rbqv4 | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes |
| | openstack-operators | multus | heat-operator-controller-manager-7fd96594c7-24pft | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes |
| | openstack-operators | default-scheduler | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt to master-0 |
| | openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | multus | barbican-operator-controller-manager-5cd89994b5-2gn4f | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-2gn4f | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-4x5h5" |
| | openstack-operators | cert-manager-certificates-trigger | placement-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | replicaset-controller | openstack-operator-controller-manager-57d98476c4 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-57d98476c4-856ml |
| | openstack-operators | multus | horizon-operator-controller-manager-f6cc97788-v9zzp | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-v9zzp | Pulling | Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" |
| | openstack-operators | default-scheduler | openstack-operator-controller-manager-57d98476c4-856ml | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml to master-0 |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" |
| | openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | multus | manila-operator-controller-manager-56f9fbf74b-p2kpr | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes |
| | openstack-operators | multus | ovn-operator-controller-manager-647f96877-kf2cl | AddedInterface | Add eth0 [10.128.0.123/23] from ovn-kubernetes |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-crqhq | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" |
| | openstack-operators | multus | watcher-operator-controller-manager-6b9b669fdb-tvkgp | AddedInterface | Add eth0 [10.128.0.128/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | multus | placement-operator-controller-manager-6b64f6f645-zgkn7 | AddedInterface | Add eth0 [10.128.0.124/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | multus | octavia-operator-controller-manager-845b79dc4f-cj7z9 | AddedInterface | Add eth0 [10.128.0.121/23] from ovn-kubernetes |
| | openstack-operators | multus | keystone-operator-controller-manager-58b8dcc5fb-crqhq | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-cj7z9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" |
| | openstack-operators | multus | test-operator-controller-manager-57dfcdd5b8-vmnjr | AddedInterface | Add eth0 [10.128.0.127/23] from ovn-kubernetes |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-p2kpr | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-pk8dx | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" |
| | openstack-operators | multus | nova-operator-controller-manager-865fc86d5b-pk8dx | AddedInterface | Add eth0 [10.128.0.120/23] from ovn-kubernetes |
| | openstack-operators | multus | mariadb-operator-controller-manager-647d75769b-l99kn | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-l99kn | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" |
| | openstack-operators | multus | swift-operator-controller-manager-696b999796-zd6d2 | AddedInterface | Add eth0 [10.128.0.125/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-key-manager | neutron-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-ft6fp" |
| | openstack-operators | multus | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | AddedInterface | Add eth0 [10.128.0.126/23] from ovn-kubernetes |
| | openstack-operators | multus | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | AddedInterface | Add eth0 [10.128.0.119/23] from ovn-kubernetes |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-4k5k2" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-zgkn7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-zd6d2 | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" |
| | openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-lw5hk" |
| | openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1" |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | Failed | Error: ErrImagePull |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | Failed | Failed to pull image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2": pull QPS exceeded |
| | openstack-operators | multus | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | AddedInterface | Add eth0 [10.128.0.130/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | nova-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "nova-operator-metrics-certs-smnkl" |
| (x2) | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | Failed | Error: ImagePullBackOff |
| | openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | designate-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-key-manager | test-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "test-operator-metrics-certs-9bdg8" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-8jbvl" |
| | openstack-operators | cert-manager-certificates-issuing | heat-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
openstack-operators |
cert-manager-certificates-request-manager |
ovn-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ovn-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
nova-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "nova-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
neutron-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "infra-operator-serving-cert-m6b4p" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
ovn-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
manila-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "manila-operator-metrics-certs-1" | |
openshift-marketplace |
default-scheduler |
certified-operators-j57qg |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-j57qg to master-0 | |
openstack-operators |
cert-manager-certificates-key-manager |
swift-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "swift-operator-metrics-certs-6swkv" | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-baremetal-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-mjjl8" | |
openstack-operators |
cert-manager-certificates-request-manager |
placement-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "placement-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-serving-cert-48bkh" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
| (x5) | openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f4ttw |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-cj5pt" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
keystone-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
nova-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
manila-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-request-manager |
swift-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "swift-operator-metrics-certs-1" | |
| (x5) | openstack-operators |
kubelet |
openstack-operator-controller-manager-57d98476c4-856ml |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| (x5) | openstack-operators |
kubelet |
openstack-operator-controller-manager-57d98476c4-856ml |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-approver |
placement-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-serving-cert |
Requested |
Created new CertificateRequest resource "infra-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x5) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
ironic-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
mariadb-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
swift-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-request-manager |
octavia-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "octavia-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
nova-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
mariadb-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
manila-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
octavia-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
telemetry-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-78955d896f-bbcpt |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
openstack-operators |
cert-manager-certificates-request-manager |
test-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "test-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
octavia-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
telemetry-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
test-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-p2kpr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" in 18.498s (18.498s including waiting). Image size: 190919617 bytes. |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" in 17.724s (17.724s including waiting). Image size: 195747812 bytes. |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-v9zzp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" in 18.929s (18.929s including waiting). Image size: 189868493 bytes. |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-l99kn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" in 18.605s (18.605s including waiting). Image size: 189260496 bytes. |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" in 18.961s (18.961s including waiting). Image size: 191230375 bytes. |
| | openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" in 17.78s (17.78s including waiting). Image size: 188866491 bytes. |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-lmlf9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" in 19.013s (19.013s including waiting). Image size: 191652289 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" in 18.186s (18.186s including waiting). Image size: 190094746 bytes. |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" in 19.194s (19.194s including waiting). Image size: 194596839 bytes. |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-zgkn7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" in 17.799s (17.799s including waiting). Image size: 190053350 bytes. |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" in 19.283s (19.283s including waiting). Image size: 191083456 bytes. |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-2gn4f | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" in 18.629s (18.629s including waiting). Image size: 190758360 bytes. |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" in 17.747s (17.747s including waiting). Image size: 177172942 bytes. |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-cj7z9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" in 18.164s (18.164s including waiting). Image size: 192837582 bytes. |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" in 18.16s (18.16s including waiting). Image size: 190697931 bytes. |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-s2sbx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" in 18.9s (18.9s including waiting). Image size: 191302081 bytes. |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-zd6d2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" in 17.769s (17.769s including waiting). Image size: 191790512 bytes. |
| | openstack-operators | neutron-operator-controller-manager-7cdd6b54fb-d2kpd_950cb04e-3979-43ad-9826-c4f8f0a0ef2e | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-7cdd6b54fb-d2kpd_950cb04e-3979-43ad-9826-c4f8f0a0ef2e became leader |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-2gn4f | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-crqhq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" in 18.616s (18.616s including waiting). Image size: 192218533 bytes. |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-l99kn | Created | Created container: manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-cj7z9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | horizon-operator-controller-manager-f6cc97788-v9zzp_bedc4f2b-34c0-41cc-bb8c-9189ac3a1448 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-f6cc97788-v9zzp_bedc4f2b-34c0-41cc-bb8c-9189ac3a1448 became leader |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-cj7z9 | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-v9zzp | Created | Created container: manager |
| | openstack-operators | multus | openstack-operator-controller-manager-57d98476c4-856ml | AddedInterface | Add eth0 [10.128.0.129/23] from ovn-kubernetes |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-v9zzp | Started | Started container manager |
| | openstack-operators | mariadb-operator-controller-manager-647d75769b-l99kn_c82820e4-d3ed-4453-8927-7d3918344b2e | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-647d75769b-l99kn_c82820e4-d3ed-4453-8927-7d3918344b2e became leader |
| | openstack-operators | manila-operator-controller-manager-56f9fbf74b-p2kpr_0e4e8474-d189-49d6-953f-ac876c9b1a54 | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-56f9fbf74b-p2kpr_0e4e8474-d189-49d6-953f-ac876c9b1a54 became leader |
| | openstack-operators | barbican-operator-controller-manager-5cd89994b5-2gn4f_e835c9cc-6259-4c67-bc67-2de98cbc7052 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-5cd89994b5-2gn4f_e835c9cc-6259-4c67-bc67-2de98cbc7052 became leader |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-l99kn | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-v9zzp | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Started | Started container manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Created | Created container: manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-cj7z9 | Created | Created container: manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-p2kpr | Created | Created container: manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-p2kpr | Started | Started container manager |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | AddedInterface | Add eth0 [10.128.0.122/23] from ovn-kubernetes |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-2gn4f | Created | Created container: manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-2gn4f | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-pk8dx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" in 18.333s (18.333s including waiting). Image size: 193269376 bytes. |
| | openstack-operators | heat-operator-controller-manager-7fd96594c7-24pft_9d67f0f7-64d3-4c3b-a3b0-9d69986044d2 | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-7fd96594c7-24pft_9d67f0f7-64d3-4c3b-a3b0-9d69986044d2 became leader |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-p2kpr | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Started | Started container manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Created | Created container: manager |
| | openshift-marketplace | multus | certified-operators-j57qg | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes |
| | openstack-operators | multus | infra-operator-controller-manager-7d9c9d7fd8-f4ttw | AddedInterface | Add eth0 [10.128.0.114/23] from ovn-kubernetes |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-l99kn | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | octavia-operator-controller-manager-845b79dc4f-cj7z9_d0b1e549-71c5-4483-b434-9793f3c65299 | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-845b79dc4f-cj7z9_d0b1e549-71c5-4483-b434-9793f3c65299 became leader |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Started | Started container manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Created | Created container: manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Started | Started container manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-f4ttw | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" |
| | openstack-operators | watcher-operator-controller-manager-6b9b669fdb-tvkgp_0fd82460-42e4-43ce-9cad-12c2fec1200a | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-6b9b669fdb-tvkgp_0fd82460-42e4-43ce-9cad-12c2fec1200a became leader |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Started | Started container manager |
| | openstack-operators | test-operator-controller-manager-57dfcdd5b8-vmnjr_5095d5ac-140b-4524-bf38-ea37b30ddbd2 | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-57dfcdd5b8-vmnjr_5095d5ac-140b-4524-bf38-ea37b30ddbd2 became leader |
| | openstack-operators | kubelet | openstack-operator-controller-manager-57d98476c4-856ml | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openstack-operators | kubelet | openstack-operator-controller-manager-57d98476c4-856ml | Started | Started container manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-zgkn7 | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Created | Created container: extract-utilities |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-crqhq | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-pk8dx | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-pk8dx | Started | Started container manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-zd6d2 | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-pk8dx | Created | Created container: manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-zd6d2 | Created | Created container: manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-crqhq | Created | Created container: manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-zgkn7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-zgkn7 | Started | Started container manager |
| | openstack-operators | cinder-operator-controller-manager-f8856dd79-rbqv4_654bd0bc-ddad-403b-82e9-a7fddb2591ab | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-f8856dd79-rbqv4_654bd0bc-ddad-403b-82e9-a7fddb2591ab became leader |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Started | Started container extract-utilities |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-lmlf9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-lmlf9 | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-lmlf9 | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-57d98476c4-856ml | Created | Created container: manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Created | Created container: manager |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-crqhq | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-zd6d2 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-s2sbx | Created | Created container: manager |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-s2sbx | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-s2sbx | Started | Started container manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 11.735s (11.735s including waiting). Image size: 176351298 bytes. |
| | openstack-operators | ironic-operator-controller-manager-7c9bfd6967-s2sbx_68a20fd4-41d1-42cf-b4f7-ea9574b9e700 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-7c9bfd6967-s2sbx_68a20fd4-41d1-42cf-b4f7-ea9574b9e700 became leader |
| | openstack-operators | nova-operator-controller-manager-865fc86d5b-pk8dx_2e68864f-5501-472b-aa96-a5fb3e874bc3 | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-865fc86d5b-pk8dx_2e68864f-5501-472b-aa96-a5fb3e874bc3 became leader |
| | openstack-operators | ovn-operator-controller-manager-647f96877-kf2cl_87839743-3e5a-42ef-9667-36db487f9089 | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-647f96877-kf2cl_87839743-3e5a-42ef-9667-36db487f9089 became leader |
| | openstack-operators | designate-operator-controller-manager-84bc9f68f5-s8bzw_f6a0463f-6532-4475-9d77-15b9204e54a8 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-84bc9f68f5-s8bzw_f6a0463f-6532-4475-9d77-15b9204e54a8 became leader |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | Started | Started container operator |
| | openstack-operators | telemetry-operator-controller-manager-7b5867bfc7-b7fd4_7ee3d0ec-5063-4781-a675-389e9a337992 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-7b5867bfc7-b7fd4_7ee3d0ec-5063-4781-a675-389e9a337992 became leader |
| | openstack-operators | keystone-operator-controller-manager-58b8dcc5fb-crqhq_a998ce08-2327-43bf-b350-d6d56117bfaa | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-58b8dcc5fb-crqhq_a998ce08-2327-43bf-b350-d6d56117bfaa became leader |
| | openstack-operators | swift-operator-controller-manager-696b999796-zd6d2_28196382-8619-4367-a80a-d2869124d37f | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-696b999796-zd6d2_28196382-8619-4367-a80a-d2869124d37f became leader |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-bbcpt | Created | Created container: operator |
| | openstack-operators | openstack-operator-controller-manager-57d98476c4-856ml_da7c1dd0-2d57-4ff1-b507-cd246cdaad9a | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-57d98476c4-856ml_da7c1dd0-2d57-4ff1-b507-cd246cdaad9a became leader |
| | openshift-marketplace | default-scheduler | community-operators-qc57x | Scheduled | Successfully assigned openshift-marketplace/community-operators-qc57x to master-0 |
| | openstack-operators | placement-operator-controller-manager-6b64f6f645-zgkn7_e77f30b8-2142-47da-ad73-1ff51e24a524 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-6b64f6f645-zgkn7_e77f30b8-2142-47da-ad73-1ff51e24a524 became leader |
| | openstack-operators | glance-operator-controller-manager-78cd4f7769-lmlf9_3dc73e1c-3f34-4c11-ab61-9e578f6663e8 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-78cd4f7769-lmlf9_3dc73e1c-3f34-4c11-ab61-9e578f6663e8 became leader |
| | openstack-operators | rabbitmq-cluster-operator-manager-78955d896f-bbcpt_b5e1d256-df3a-42ee-888b-c233b2c27a90 | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-78955d896f-bbcpt_b5e1d256-df3a-42ee-888b-c233b2c27a90 became leader |
| | openshift-marketplace | kubelet | community-operators-qc57x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openshift-marketplace | multus | community-operators-qc57x | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-qc57x | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 4.246s (4.246s including waiting). Image size: 1204969293 bytes. |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-qc57x | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-qc57x | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" in 12.293s (12.293s including waiting). Image size: 190602344 bytes. |
| | openshift-marketplace | default-scheduler | redhat-marketplace-tpxmq | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-tpxmq to master-0 |
| | openshift-marketplace | kubelet | community-operators-qc57x | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 4.376s (4.377s including waiting). Image size: 1201319250 bytes. |
| | openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-f4ttw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" in 13.813s (13.813s including waiting). Image size: 179448753 bytes. |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 4.599s (4.599s including waiting). Image size: 912736453 bytes. |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-lmlf9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 10.921s (10.921s including waiting). Image size: 68421467 bytes. |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-tpxmq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-b7fd4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 11.033s (11.033s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 10.387s (10.387s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-s8bzw | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 10.86s (10.86s including waiting). Image size: 68421467 bytes. |
| | openshift-marketplace | kubelet | certified-operators-j57qg | Started | Started container registry-server |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-p2kpr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 16.046s (16.046s including waiting). Image size: 68421467 bytes. |
| | openshift-marketplace | kubelet | community-operators-qc57x | Started | Started container extract-content |
| | openshift-marketplace | multus | redhat-marketplace-tpxmq | AddedInterface | Add eth0 [10.128.0.133/23] from ovn-kubernetes |
| | openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-f4ttw | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | Started | Started container manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 13.76s (13.76s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-vmnjr | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-l99kn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 16.008s (16.008s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-rbqv4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 13.734s (13.734s including waiting). Image size: 68421467 bytes. |
| | openshift-marketplace | kubelet | community-operators-qc57x | Created | Created container: extract-content |
| | openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-f4ttw | Created | Created container: manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 16.097s (16.097s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-f4ttw | Started | Started container manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 14.152s (14.152s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 15.606s (15.606s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-24pft | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine |
| | openstack-operators | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5_5901b083-320d-4fb0-b0c8-b9e5f859e6fc | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5_5901b083-320d-4fb0-b0c8-b9e5f859e6fc became leader |
| | openstack-operators | infra-operator-controller-manager-7d9c9d7fd8-f4ttw_a1c8bc7a-cdc6-45db-964b-26aabbc9217a | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-7d9c9d7fd8-f4ttw_a1c8bc7a-cdc6-45db-964b-26aabbc9217a became leader |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-v9zzp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 17.236s (17.236s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-pk8dx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 12.002s (12.002s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-cj7z9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 16.863s (16.863s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-zd6d2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 11.205s (11.205s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-d2kpd | Created | Created container: kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-tpxmq | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-tpxmq | Created | Created container: extract-utilities |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-2gn4f | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 16.859s (16.859s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-kf2cl | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-s2sbx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 9.828s (9.828s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-zgkn7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 11.44s (11.44s including waiting). Image size: 68421467 bytes. |
| | openshift-marketplace | kubelet | community-operators-qc57x | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-zgkn7 | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-tvkgp | Started | Started container kube-rbac-proxy |
openstack-operators |
kubelet |
placement-operator-controller-manager-6b64f6f645-zgkn7 |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6b9b669fdb-tvkgp |
Created |
Created container: kube-rbac-proxy | |
openshift-marketplace |
kubelet |
community-operators-qc57x |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 569ms (569ms including waiting). Image size: 912736453 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7b5867bfc7-b7fd4 |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7b5867bfc7-b7fd4 |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f4ttw |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-647d75769b-l99kn |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-647d75769b-l99kn |
Created |
Created container: kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-58b8dcc5fb-crqhq |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 12.005s (12.005s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f4ttw |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
glance-operator-controller-manager-78cd4f7769-lmlf9 |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
glance-operator-controller-manager-78cd4f7769-lmlf9 |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
manila-operator-controller-manager-56f9fbf74b-p2kpr |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
manila-operator-controller-manager-56f9fbf74b-p2kpr |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5 |
Created |
Created container: kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Created |
Created container: extract-content | |
openstack-operators |
kubelet |
nova-operator-controller-manager-865fc86d5b-pk8dx |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-s2sbx |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-s2sbx |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
nova-operator-controller-manager-865fc86d5b-pk8dx |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
swift-operator-controller-manager-696b999796-zd6d2 |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-f6cc97788-v9zzp |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-f6cc97788-v9zzp |
Started |
Started container kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Started |
Started container extract-content | |
openstack-operators |
kubelet |
swift-operator-controller-manager-696b999796-zd6d2 |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-cj7z9 |
Created |
Created container: kube-rbac-proxy | |
openshift-marketplace |
kubelet |
community-operators-qc57x |
Created |
Created container: registry-server | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-58b8dcc5fb-crqhq |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-58b8dcc5fb-crqhq |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-5cd89994b5-2gn4f |
Created |
Created container: kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 796ms (796ms including waiting). Image size: 1129027903 bytes. | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-5cd89994b5-2gn4f |
Started |
Started container kube-rbac-proxy | |
openshift-marketplace |
kubelet |
community-operators-qc57x |
Started |
Started container registry-server | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-cj7z9 |
Started |
Started container kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 430ms (430ms including waiting). Image size: 912736453 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-tpxmq |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-j57qg |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
community-operators-qc57x |
Killing |
Stopping container registry-server | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29412885-9lr8x |
AddedInterface |
Add eth0 [10.128.0.134/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29412885-9lr8x |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-29412885 | |
openshift-operator-lifecycle-manager |
default-scheduler |
collect-profiles-29412885-9lr8x |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x to master-0 | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29412885 |
SuccessfulCreate |
Created pod: collect-profiles-29412885-9lr8x | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29412885-9lr8x |
Created |
Created container: collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29412885-9lr8x |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulDelete |
Deleted job collect-profiles-29412840 | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-29412885, condition: Complete | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29412885 |
Completed |
Job completed | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-must-gather-7lc2b namespace |