Namespace | Component | RelatedObject | Reason | Count | Message
openshift-monitoring | | thanos-querier-5cd89459d5-wwnjs | Scheduled | | Successfully assigned openshift-monitoring/thanos-querier-5cd89459d5-wwnjs to master-0
openshift-monitoring | | prometheus-k8s-0 | Scheduled | | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
assisted-installer | | assisted-installer-controller-v949k | Scheduled | | Successfully assigned assisted-installer/assisted-installer-controller-v949k to master-0
openshift-machine-api | | control-plane-machine-set-operator-6686554ddc-8krst | Scheduled | | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-6686554ddc-8krst to master-0
openshift-marketplace | | redhat-marketplace-4fjw9 | Scheduled | | Successfully assigned openshift-marketplace/redhat-marketplace-4fjw9 to master-0
openshift-marketplace | | redhat-marketplace-4r9ht | Scheduled | | Successfully assigned openshift-marketplace/redhat-marketplace-4r9ht to master-0
openshift-authentication-operator | | authentication-operator-7c6989d6c4-dkqc4 | Scheduled | | Successfully assigned openshift-authentication-operator/authentication-operator-7c6989d6c4-dkqc4 to master-0
openshift-multus | | multus-admission-controller-8d675b596-jgdmb | Scheduled | | Successfully assigned openshift-multus/multus-admission-controller-8d675b596-jgdmb to master-0
openshift-marketplace | | certified-operators-lqc4n | Scheduled | | Successfully assigned openshift-marketplace/certified-operators-lqc4n to master-0
openshift-authentication-operator | | authentication-operator-7c6989d6c4-dkqc4 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-machine-api | | machine-api-operator-84bf6db4f9-bncfj | Scheduled | | Successfully assigned openshift-machine-api/machine-api-operator-84bf6db4f9-bncfj to master-0
openshift-oauth-apiserver | | apiserver-74444d8fbc-g7z4w | Scheduled | | Successfully assigned openshift-oauth-apiserver/apiserver-74444d8fbc-g7z4w to master-0
openshift-operator-lifecycle-manager | | catalog-operator-7d9c49f57b-8jr6f | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-operator-lifecycle-manager | | catalog-operator-7d9c49f57b-8jr6f | Scheduled | | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-8jr6f to master-0
openshift-cluster-storage-operator | | csi-snapshot-controller-operator-5685fbc7d-5v8g4 | Scheduled | | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-5v8g4 to master-0
openshift-marketplace | | redhat-operators-9j9zs | Scheduled | | Successfully assigned openshift-marketplace/redhat-operators-9j9zs to master-0
openshift-machine-config-operator | | machine-config-controller-ff46b7bdf-z5fkp | Scheduled | | Successfully assigned openshift-machine-config-operator/machine-config-controller-ff46b7bdf-z5fkp to master-0
openshift-machine-config-operator | | machine-config-daemon-k7pnc | Scheduled | | Successfully assigned openshift-machine-config-operator/machine-config-daemon-k7pnc to master-0
openshift-marketplace | | redhat-operators-mr22p | Scheduled | | Successfully assigned openshift-marketplace/redhat-operators-mr22p to master-0
openshift-operator-lifecycle-manager | | olm-operator-d64cfc9db-8qtmf | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-storage-operator | | csi-snapshot-controller-operator-5685fbc7d-5v8g4 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-machine-config-operator | | machine-config-operator-fdb5c78b5-5nbfk | Scheduled | | Successfully assigned openshift-machine-config-operator/machine-config-operator-fdb5c78b5-5nbfk to master-0
openshift-multus | | network-metrics-daemon-krv7c | Scheduled | | Successfully assigned openshift-multus/network-metrics-daemon-krv7c to master-0
openshift-monitoring | | alertmanager-main-0 | Scheduled | | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-service-ca-operator | | service-ca-operator-69b6fc6b88-p8hlq | Scheduled | | Successfully assigned openshift-service-ca-operator/service-ca-operator-69b6fc6b88-p8hlq to master-0
openshift-monitoring | | alertmanager-main-0 | Scheduled | | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-cluster-storage-operator | | csi-snapshot-controller-7577d6f48-vd52m | Scheduled | | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-vd52m to master-0
openshift-service-ca-operator | | service-ca-operator-69b6fc6b88-p8hlq | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-multus | | multus-dllkj | Scheduled | | Successfully assigned openshift-multus/multus-dllkj to master-0
openshift-operator-lifecycle-manager | | olm-operator-d64cfc9db-8qtmf | Scheduled | | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-8qtmf to master-0
openshift-cluster-version | | cluster-version-operator-745944c6b7-dcbvq | Scheduled | | Successfully assigned openshift-cluster-version/cluster-version-operator-745944c6b7-dcbvq to master-0
openshift-operator-controller | | operator-controller-controller-manager-6598bfb6c4-7nhvs | Scheduled | | Successfully assigned openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-7nhvs to master-0
openshift-monitoring | | prometheus-operator-admission-webhook-8464df8497-st8tx | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-authentication | | oauth-openshift-6df5fc69d-thc6n | Scheduled | | Successfully assigned openshift-authentication/oauth-openshift-6df5fc69d-thc6n to master-0
openshift-service-ca | | service-ca-84bfdbbb7f-bc2m2 | Scheduled | | Successfully assigned openshift-service-ca/service-ca-84bfdbbb7f-bc2m2 to master-0
openshift-catalogd | | catalogd-controller-manager-7f8b8b6f4c-w2q2q | Scheduled | | Successfully assigned openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-w2q2q to master-0
openshift-cloud-controller-manager-operator | | cluster-cloud-controller-manager-operator-559568b945-8lgqf | Scheduled | | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-8lgqf to master-0
openshift-cloud-controller-manager-operator | | cluster-cloud-controller-manager-operator-7c8df9b496-nwttq | Scheduled | | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-nwttq to master-0
openshift-authentication | | oauth-openshift-69dcf9d7fd-5tbt2 | FailedScheduling | | skip schedule deleting pod: openshift-authentication/oauth-openshift-69dcf9d7fd-5tbt2
openshift-operator-lifecycle-manager | | package-server-manager-854648ff6d-phgxj | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-operator-lifecycle-manager | | package-server-manager-854648ff6d-phgxj | Scheduled | | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-phgxj to master-0
openshift-authentication | | oauth-openshift-69dcf9d7fd-5tbt2 | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-machine-config-operator | | machine-config-server-wkt98 | Scheduled | | Successfully assigned openshift-machine-config-operator/machine-config-server-wkt98 to master-0
openshift-cluster-version | | cluster-version-operator-8c9c967c7-vm7rj | Scheduled | | Successfully assigned openshift-cluster-version/cluster-version-operator-8c9c967c7-vm7rj to master-0
openshift-monitoring | | prometheus-k8s-0 | Scheduled | | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
openshift-marketplace | | certified-operators-9nqqp | Scheduled | | Successfully assigned openshift-marketplace/certified-operators-9nqqp to master-0
openshift-multus | | cni-sysctl-allowlist-ds-85ss7 | Scheduled | | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-85ss7 to master-0
openshift-machine-api | | machine-api-operator-84bf6db4f9-bncfj | Scheduled | | Successfully assigned openshift-machine-api/machine-api-operator-84bf6db4f9-bncfj to master-0
openshift-route-controller-manager | | route-controller-manager-5d7d75cbb9-lf8cw | Scheduled | | Successfully assigned openshift-route-controller-manager/route-controller-manager-5d7d75cbb9-lf8cw to master-0
openshift-kube-apiserver-operator | | kube-apiserver-operator-68bd585b-7gtw2 | Scheduled | | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-7gtw2 to master-0
openshift-route-controller-manager | | route-controller-manager-5d7d75cbb9-lf8cw | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-route-controller-manager | | route-controller-manager-5d7d75cbb9-lf8cw | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-route-controller-manager | | route-controller-manager-5d647dccbb-6cz8b | Scheduled | | Successfully assigned openshift-route-controller-manager/route-controller-manager-5d647dccbb-6cz8b to master-0
openshift-kube-apiserver-operator | | kube-apiserver-operator-68bd585b-7gtw2 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-route-controller-manager | | route-controller-manager-5d647dccbb-6cz8b | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-machine-api | | cluster-baremetal-operator-5cdb4c5598-qldx6 | Scheduled | | Successfully assigned openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qldx6 to master-0
openshift-authentication | | oauth-openshift-5b6fc868c6-zc2fj | Scheduled | | Successfully assigned openshift-authentication/oauth-openshift-5b6fc868c6-zc2fj to master-0
openshift-authentication | | oauth-openshift-5b6fc868c6-zc2fj | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-multus | | multus-dllkj | Scheduled | | Successfully assigned openshift-multus/multus-dllkj to master-0
openshift-monitoring | | prometheus-operator-5ff8674d55-qxpv9 | Scheduled | | Successfully assigned openshift-monitoring/prometheus-operator-5ff8674d55-qxpv9 to master-0
openshift-insights | | insights-operator-8f89dfddd-brq9l | Scheduled | | Successfully assigned openshift-insights/insights-operator-8f89dfddd-brq9l to master-0
openshift-monitoring | | prometheus-operator-admission-webhook-8464df8497-st8tx | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | | prometheus-operator-admission-webhook-8464df8497-st8tx | Scheduled | | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-st8tx to master-0
openshift-ingress-operator | | ingress-operator-677db989d6-blw5x | Scheduled | | Successfully assigned openshift-ingress-operator/ingress-operator-677db989d6-blw5x to master-0
openshift-cloud-credential-operator | | cloud-credential-operator-55d85b7b47-nrb7q | Scheduled | | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-nrb7q to master-0
openshift-network-operator | | network-operator-7c649bf6d4-st2sr | Scheduled | | Successfully assigned openshift-network-operator/network-operator-7c649bf6d4-st2sr to master-0
openshift-monitoring | | telemeter-client-6cfc594d97-x62fk | Scheduled | | Successfully assigned openshift-monitoring/telemeter-client-6cfc594d97-x62fk to master-0
openshift-machine-api | | cluster-autoscaler-operator-69576476f7-dpg4q | Scheduled | | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-69576476f7-dpg4q to master-0
openshift-route-controller-manager | | route-controller-manager-58959cd4d6-d985l | Scheduled | | Successfully assigned openshift-route-controller-manager/route-controller-manager-58959cd4d6-d985l to master-0
openshift-route-controller-manager | | route-controller-manager-56f6fc54fd-nwfzl | Scheduled | | Successfully assigned openshift-route-controller-manager/route-controller-manager-56f6fc54fd-nwfzl to master-0
openshift-route-controller-manager | | route-controller-manager-56f6fc54fd-nwfzl | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-network-operator | | mtu-prober-sbmgv | Scheduled | | Successfully assigned openshift-network-operator/mtu-prober-sbmgv to master-0
openshift-cluster-storage-operator | | cluster-storage-operator-6fbfc8dc8f-sdsks | Scheduled | | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-sdsks to master-0
openshift-machine-api | | control-plane-machine-set-operator-6686554ddc-8krst | Scheduled | | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-6686554ddc-8krst to master-0
openshift-ingress-operator | | ingress-operator-677db989d6-blw5x | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-machine-api | | cluster-baremetal-operator-5cdb4c5598-qldx6 | Scheduled | | Successfully assigned openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qldx6 to master-0
openshift-multus | | multus-admission-controller-7769569c45-5n69x | Scheduled | | Successfully assigned openshift-multus/multus-admission-controller-7769569c45-5n69x to master-0
openshift-marketplace | | community-operators-6t5lg | Scheduled | | Successfully assigned openshift-marketplace/community-operators-6t5lg to master-0
openshift-multus | | multus-additional-cni-plugins-d5jxb | Scheduled | | Successfully assigned openshift-multus/multus-additional-cni-plugins-d5jxb to master-0
openshift-machine-api | | cluster-autoscaler-operator-69576476f7-dpg4q | Scheduled | | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-69576476f7-dpg4q to master-0
openshift-cluster-node-tuning-operator | | tuned-67jx5 | Scheduled | | Successfully assigned openshift-cluster-node-tuning-operator/tuned-67jx5 to master-0
openshift-dns | | dns-default-jfjzg | Scheduled | | Successfully assigned openshift-dns/dns-default-jfjzg to master-0
openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-66c7586884-9vjl9 | Scheduled | | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9vjl9 to master-0
openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-66c7586884-9vjl9 | Scheduled | | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9vjl9 to master-0
openshift-cluster-node-tuning-operator | | tuned-67jx5 | Scheduled | | Successfully assigned openshift-cluster-node-tuning-operator/tuned-67jx5 to master-0
openshift-cluster-olm-operator | | cluster-olm-operator-77899cf6d-r9zcq | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-olm-operator | | cluster-olm-operator-77899cf6d-r9zcq | Scheduled | | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-r9zcq to master-0
openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-66c7586884-9vjl9 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-66c7586884-9vjl9 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-scheduler-operator | | openshift-kube-scheduler-operator-5c74bfc494-bh886 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-dns | | node-resolver-l9pkr | Scheduled | | Successfully assigned openshift-dns/node-resolver-l9pkr to master-0
openshift-ingress-canary | | ingress-canary-5qffz | Scheduled | | Successfully assigned openshift-ingress-canary/ingress-canary-5qffz to master-0
openshift-dns-operator | | dns-operator-589895fbb7-gmvnl | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ingress | | router-default-79f8cd6fdd-r6nkv | Scheduled | | Successfully assigned openshift-ingress/router-default-79f8cd6fdd-r6nkv to master-0
openshift-apiserver-operator | | openshift-apiserver-operator-799b6db4d7-rj9cl | Scheduled | | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-rj9cl to master-0
openshift-apiserver-operator | | openshift-apiserver-operator-799b6db4d7-rj9cl | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ingress | | router-default-79f8cd6fdd-r6nkv | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ovn-kubernetes | | ovnkube-node-2w9mf | Scheduled | | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-2w9mf to master-0
openshift-marketplace | | marketplace-operator-64bf9778cb-mgb5v | Scheduled | | Successfully assigned openshift-marketplace/marketplace-operator-64bf9778cb-mgb5v to master-0
openshift-ingress | | router-default-79f8cd6fdd-r6nkv | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | | cluster-monitoring-operator-674cbfbd9d-cxs8s | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | | cluster-monitoring-operator-674cbfbd9d-cxs8s | Scheduled | | Successfully assigned openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-cxs8s to master-0
openshift-image-registry | | node-ca-ttpzw | Scheduled | | Successfully assigned openshift-image-registry/node-ca-ttpzw to master-0
openshift-marketplace | | marketplace-operator-64bf9778cb-mgb5v | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | | kube-state-metrics-68b88f8cb5-qjxhc | Scheduled | | Successfully assigned openshift-monitoring/kube-state-metrics-68b88f8cb5-qjxhc to master-0
openshift-monitoring | | metrics-server-6474759988-dnw4m | Scheduled | | Successfully assigned openshift-monitoring/metrics-server-6474759988-dnw4m to master-0
openshift-monitoring | | metrics-server-7b45f5889c-z48tj | Scheduled | | Successfully assigned openshift-monitoring/metrics-server-7b45f5889c-z48tj to master-0
openshift-monitoring | | monitoring-plugin-6db79546f6-gdz4k | Scheduled | | Successfully assigned openshift-monitoring/monitoring-plugin-6db79546f6-gdz4k to master-0
openshift-monitoring | | node-exporter-bx9dn | Scheduled | | Successfully assigned openshift-monitoring/node-exporter-bx9dn to master-0
openshift-controller-manager | | controller-manager-5ddc94864c-7nwdc | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-monitoring | | prometheus-k8s-0 | Scheduled | | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
openshift-multus | | multus-admission-controller-8d675b596-jgdmb | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | | cluster-monitoring-operator-674cbfbd9d-cxs8s | Scheduled | | Successfully assigned openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-cxs8s to master-0
openshift-monitoring | | cluster-monitoring-operator-674cbfbd9d-cxs8s | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-image-registry | | cluster-image-registry-operator-86d6d77c7c-k7dp2 | Scheduled | | Successfully assigned openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-k7dp2 to master-0
openshift-monitoring | | prometheus-k8s-0 | Scheduled | | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
openshift-apiserver | | apiserver-85cb8cb9bb-bmx44 | Scheduled | | Successfully assigned openshift-apiserver/apiserver-85cb8cb9bb-bmx44 to master-0
openshift-apiserver | | apiserver-85cb8cb9bb-bmx44 | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-ovn-kubernetes | | ovnkube-control-plane-66b55d57d-m77x2 | Scheduled | | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-m77x2 to master-0
openshift-cluster-machine-approver | | machine-approver-754bdc9f9d-xpl2b | Scheduled | | Successfully assigned openshift-cluster-machine-approver/machine-approver-754bdc9f9d-xpl2b to master-0
openshift-apiserver | | apiserver-65677d845c-495g9 | Scheduled | | Successfully assigned openshift-apiserver/apiserver-65677d845c-495g9 to master-0
openshift-monitoring | | prometheus-operator-5ff8674d55-qxpv9 | Scheduled | | Successfully assigned openshift-monitoring/prometheus-operator-5ff8674d55-qxpv9 to master-0
openshift-monitoring | | prometheus-operator-admission-webhook-8464df8497-st8tx | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | | prometheus-operator-admission-webhook-8464df8497-st8tx | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | | prometheus-operator-admission-webhook-8464df8497-st8tx | Scheduled | | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-st8tx to master-0
openshift-monitoring | | telemeter-client-6cfc594d97-x62fk | Scheduled | | Successfully assigned openshift-monitoring/telemeter-client-6cfc594d97-x62fk to master-0
openshift-monitoring | | thanos-querier-5cd89459d5-wwnjs | Scheduled | | Successfully assigned openshift-monitoring/thanos-querier-5cd89459d5-wwnjs to master-0
openshift-route-controller-manager | | route-controller-manager-544c885f6d-dr4gh | Scheduled | | Successfully assigned openshift-route-controller-manager/route-controller-manager-544c885f6d-dr4gh to master-0
openshift-route-controller-manager | | route-controller-manager-544c885f6d-dr4gh | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-route-controller-manager | | route-controller-manager-544c885f6d-dr4gh | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-multus | | cni-sysctl-allowlist-ds-85ss7 | Scheduled | | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-85ss7 to master-0
openshift-kube-scheduler-operator | | openshift-kube-scheduler-operator-5c74bfc494-bh886 | Scheduled | | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-bh886 to master-0
openshift-dns-operator | | dns-operator-589895fbb7-gmvnl | Scheduled | | Successfully assigned openshift-dns-operator/dns-operator-589895fbb7-gmvnl to master-0
openshift-multus | | multus-additional-cni-plugins-d5jxb | Scheduled | | Successfully assigned openshift-multus/multus-additional-cni-plugins-d5jxb to master-0
openshift-config-operator | | openshift-config-operator-64488f9d78-vnl28 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-config-operator | | openshift-config-operator-64488f9d78-vnl28 | Scheduled | | Successfully assigned openshift-config-operator/openshift-config-operator-64488f9d78-vnl28 to master-0
openshift-cluster-samples-operator | | cluster-samples-operator-664cb58b85-8lf4q | Scheduled | | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8lf4q to master-0
openshift-monitoring | | alertmanager-main-0 | Scheduled | | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-monitoring | | alertmanager-main-0 | Scheduled | | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-network-operator | | iptables-alerter-rfnqf | Scheduled | | Successfully assigned openshift-network-operator/iptables-alerter-rfnqf to master-0
openshift-kube-storage-version-migrator-operator | | kube-storage-version-migrator-operator-7f65c457f5-st7mk | Scheduled | | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-st7mk to master-0
openshift-kube-controller-manager-operator | | kube-controller-manager-operator-86d7cdfdfb-pfdrx | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-controller-manager-operator | | kube-controller-manager-operator-86d7cdfdfb-pfdrx | Scheduled | | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-pfdrx to master-0
openshift-kube-storage-version-migrator-operator | | kube-storage-version-migrator-operator-7f65c457f5-st7mk | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-console | | console-5c84b9c874-8xl2l | Scheduled | | Successfully assigned openshift-console/console-5c84b9c874-8xl2l to master-0
openshift-console | | console-6479f6d896-j6kqz | Scheduled | | Successfully assigned openshift-console/console-6479f6d896-j6kqz to master-0
openshift-console | | console-6787d8db86-xxqwp | Scheduled | | Successfully assigned openshift-console/console-6787d8db86-xxqwp to master-0
openshift-monitoring | | openshift-state-metrics-74cc79fd76-s9b9v | Scheduled | | Successfully assigned openshift-monitoring/openshift-state-metrics-74cc79fd76-s9b9v to master-0
openshift-console | | console-6dc96f5b89-ctlsc | Scheduled | | Successfully assigned openshift-console/console-6dc96f5b89-ctlsc to master-0
openshift-console | | console-76c777474b-n9mhf | Scheduled | | Successfully assigned openshift-console/console-76c777474b-n9mhf to master-0
openshift-console | | console-c45bf598-vngbg | Scheduled | | Successfully assigned openshift-console/console-c45bf598-vngbg to master-0
openshift-monitoring | | node-exporter-bx9dn | Scheduled | | Successfully assigned openshift-monitoring/node-exporter-bx9dn to master-0
openshift-network-node-identity | | network-node-identity-m7549 | Scheduled | | Successfully assigned openshift-network-node-identity/network-node-identity-m7549 to master-0
openshift-monitoring | | monitoring-plugin-6db79546f6-gdz4k | Scheduled | | Successfully assigned openshift-monitoring/monitoring-plugin-6db79546f6-gdz4k to master-0
openshift-console | | downloads-84f57b9877-8g27w | Scheduled | | Successfully assigned openshift-console/downloads-84f57b9877-8g27w to master-0
openshift-console-operator | | console-operator-6c7fb6b958-db7d8 | Scheduled | | Successfully assigned openshift-console-operator/console-operator-6c7fb6b958-db7d8 to master-0
openshift-monitoring | | metrics-server-7b45f5889c-z48tj | Scheduled | | Successfully assigned openshift-monitoring/metrics-server-7b45f5889c-z48tj to master-0
openshift-monitoring | | metrics-server-6474759988-dnw4m | Scheduled | | Successfully assigned openshift-monitoring/metrics-server-6474759988-dnw4m to master-0
openshift-controller-manager | | controller-manager-5b4bdf67b6-8rdjs | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | | controller-manager-5b4bdf67b6-8rdjs | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-network-diagnostics | | network-check-target-w5fjg | Scheduled | | Successfully assigned openshift-network-diagnostics/network-check-target-w5fjg to master-0
openshift-controller-manager | | controller-manager-5b4bdf67b6-8rdjs | Scheduled | | Successfully assigned openshift-controller-manager/controller-manager-5b4bdf67b6-8rdjs to master-0
openshift-monitoring | | openshift-state-metrics-74cc79fd76-s9b9v | Scheduled | | Successfully assigned openshift-monitoring/openshift-state-metrics-74cc79fd76-s9b9v to master-0
assisted-installer | | assisted-installer-controller-v949k | FailedScheduling | | no nodes available to schedule pods
openshift-image-registry | | cluster-image-registry-operator-86d6d77c7c-k7dp2 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-multus | | multus-admission-controller-8d675b596-jgdmb | Scheduled | | Successfully assigned openshift-multus/multus-admission-controller-8d675b596-jgdmb to master-0
openshift-controller-manager | | controller-manager-5ddc94864c-7nwdc | Scheduled | | Successfully assigned openshift-controller-manager/controller-manager-5ddc94864c-7nwdc to master-0
openshift-cluster-machine-approver | | machine-approver-955fcfb87-rh4g5 | Scheduled | | Successfully assigned openshift-cluster-machine-approver/machine-approver-955fcfb87-rh4g5 to master-0
openshift-controller-manager-operator | | openshift-controller-manager-operator-8565d84698-49hzm | Scheduled | | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-49hzm to master-0
openshift-controller-manager-operator | | openshift-controller-manager-operator-8565d84698-49hzm | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ovn-kubernetes | | ovnkube-node-tf5qg | Scheduled | | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-tf5qg to master-0
openshift-network-diagnostics | | network-check-source-7c67b67d47-sctv9 | Scheduled | | Successfully assigned openshift-network-diagnostics/network-check-source-7c67b67d47-sctv9 to master-0
openshift-network-diagnostics | | network-check-source-7c67b67d47-sctv9 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-multus | | multus-admission-controller-8d675b596-jgdmb | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-network-diagnostics | | network-check-source-7c67b67d47-sctv9 | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-controller-manager | | controller-manager-8597858f97-kb2l8 | Scheduled | | Successfully assigned openshift-controller-manager/controller-manager-8597858f97-kb2l8 to master-0
openshift-etcd-operator | | etcd-operator-5884b9cd56-27phk | Scheduled | | Successfully assigned openshift-etcd-operator/etcd-operator-5884b9cd56-27phk to master-0
openshift-kube-storage-version-migrator | | migrator-57ccdf9b5-tbcsh | Scheduled | | Successfully assigned openshift-kube-storage-version-migrator/migrator-57ccdf9b5-tbcsh to master-0
openshift-etcd-operator | | etcd-operator-5884b9cd56-27phk | FailedScheduling | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-operator-lifecycle-manager | | packageserver-9c44c86f9-rplwv | Scheduled | | Successfully assigned openshift-operator-lifecycle-manager/packageserver-9c44c86f9-rplwv to master-0
openshift-network-console | | networking-console-plugin-5cbd49d755-69bg7 | Scheduled | | Successfully assigned openshift-network-console/networking-console-plugin-5cbd49d755-69bg7 to master-0
openshift-catalogd | | catalogd-controller-manager-7f8b8b6f4c-w2q2q | Scheduled | | Successfully assigned openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-w2q2q to master-0
openshift-controller-manager | | controller-manager-6b549b45d9-fhqdk | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | | controller-manager-6b549b45d9-fhqdk | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-multus | | network-metrics-daemon-krv7c | Scheduled | | Successfully assigned openshift-multus/network-metrics-daemon-krv7c to master-0
openshift-monitoring | | kube-state-metrics-68b88f8cb5-qjxhc | Scheduled | | Successfully assigned openshift-monitoring/kube-state-metrics-68b88f8cb5-qjxhc to master-0
openshift-multus | | multus-admission-controller-7769569c45-5n69x | Scheduled | | Successfully assigned openshift-multus/multus-admission-controller-7769569c45-5n69x to master-0
openshift-marketplace | | community-operators-ms5vp | Scheduled | | Successfully assigned openshift-marketplace/community-operators-ms5vp to master-0
openshift-controller-manager | | controller-manager-6f7fd6c796-tlbts | Scheduled | | Successfully assigned openshift-controller-manager/controller-manager-6f7fd6c796-tlbts to master-0
openshift-controller-manager | | controller-manager-7775b8f858-tgbrj | Scheduled | | Successfully assigned openshift-controller-manager/controller-manager-7775b8f858-tgbrj to master-0
openshift-controller-manager | | controller-manager-7f9d55fb8-5ndvl | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | | controller-manager-7f9d55fb8-5ndvl | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | | controller-manager-7f9d55fb8-5ndvl | FailedScheduling | | skip schedule deleting pod: openshift-controller-manager/controller-manager-7f9d55fb8-5ndvl
openshift-controller-manager | | controller-manager-8597858f97-kb2l8 | FailedScheduling | | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

kube-system

Required control plane pods have been created

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_e89ba1ef-17ed-4de1-9833-4ba5bd2d82d1 became leader

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_f8044801-6f26-4ac1-bc0b-a06cd6b024a8 became leader

kube-system

cluster-policy-controller

bootstrap-kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_9a2a2fa0-bdd6-41af-b172-939e4728c1fc became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_1be4e474-a8c5-46cd-aaae-b45ae1e3fd41 became leader

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_e0614686-6215-4af8-a5a9-d28100d45b8a became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for default namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-node-lease namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-public namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-system namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-version namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for assisted-installer namespace
(x2)

assisted-installer

job-controller

assisted-installer-controller

FailedCreate

Error creating: pods "assisted-installer-controller-" is forbidden: error looking up service account assisted-installer/assisted-installer-controller: serviceaccount "assisted-installer-controller" not found

assisted-installer

job-controller

assisted-installer-controller

SuccessfulCreate

Created pod: assisted-installer-controller-v949k

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-credential-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-operator namespace

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_0954e1bb-36cb-4644-a64f-0f79ddd8aa69 became leader

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-745944c6b7 to 1

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_78a2b761-af2e-497e-b19a-c4cfd2e848bd became leader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-storage-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-network-config-controller namespace

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-marketplace namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-insights namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-csi-drivers namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-node-tuning-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-machine-approver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-samples-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-image-registry namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-olm-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-openstack-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kni-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovirt-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-lifecycle-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operators namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-vsphere-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nutanix-infra namespace

openshift-cluster-olm-operator

deployment-controller

cluster-olm-operator

ScalingReplicaSet

Scaled up replica set cluster-olm-operator-77899cf6d to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-platform-infra namespace

openshift-kube-storage-version-migrator-operator

deployment-controller

kube-storage-version-migrator-operator

ScalingReplicaSet

Scaled up replica set kube-storage-version-migrator-operator-7f65c457f5 to 1

openshift-kube-controller-manager-operator

deployment-controller

kube-controller-manager-operator

ScalingReplicaSet

Scaled up replica set kube-controller-manager-operator-86d7cdfdfb to 1

openshift-kube-scheduler-operator

deployment-controller

openshift-kube-scheduler-operator

ScalingReplicaSet

Scaled up replica set openshift-kube-scheduler-operator-5c74bfc494 to 1

openshift-dns-operator

deployment-controller

dns-operator

ScalingReplicaSet

Scaled up replica set dns-operator-589895fbb7 to 1

openshift-network-operator

deployment-controller

network-operator

ScalingReplicaSet

Scaled up replica set network-operator-7c649bf6d4 to 1

openshift-service-ca-operator

deployment-controller

service-ca-operator

ScalingReplicaSet

Scaled up replica set service-ca-operator-69b6fc6b88 to 1

openshift-controller-manager-operator

deployment-controller

openshift-controller-manager-operator

ScalingReplicaSet

Scaled up replica set openshift-controller-manager-operator-8565d84698 to 1

openshift-apiserver-operator

deployment-controller

openshift-apiserver-operator

ScalingReplicaSet

Scaled up replica set openshift-apiserver-operator-799b6db4d7 to 1

openshift-marketplace

deployment-controller

marketplace-operator

ScalingReplicaSet

Scaled up replica set marketplace-operator-64bf9778cb to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-monitoring namespace

openshift-authentication-operator

deployment-controller

authentication-operator

ScalingReplicaSet

Scaled up replica set authentication-operator-7c6989d6c4 to 1
(x2)

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

openshift-etcd-operator

deployment-controller

etcd-operator

ScalingReplicaSet

Scaled up replica set etcd-operator-5884b9cd56 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-user-workload-monitoring namespace
(x9)

assisted-installer

default-scheduler

assisted-installer-controller-v949k

FailedScheduling

no nodes available to schedule pods
(x12)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-77899cf6d

FailedCreate

Error creating: pods "cluster-olm-operator-77899cf6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-managed namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-api namespace
(x12)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-7f65c457f5

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-7f65c457f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-network-operator

replicaset-controller

network-operator-7c649bf6d4

FailedCreate

Error creating: pods "network-operator-7c649bf6d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-86d7cdfdfb

FailedCreate

Error creating: pods "kube-controller-manager-operator-86d7cdfdfb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-dns-operator

replicaset-controller

dns-operator-589895fbb7

FailedCreate

Error creating: pods "dns-operator-589895fbb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5c74bfc494

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-5c74bfc494-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-799b6db4d7

FailedCreate

Error creating: pods "openshift-apiserver-operator-799b6db4d7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-69b6fc6b88

FailedCreate

Error creating: pods "service-ca-operator-69b6fc6b88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-8565d84698

FailedCreate

Error creating: pods "openshift-controller-manager-operator-8565d84698-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-etcd-operator

replicaset-controller

etcd-operator-5884b9cd56

FailedCreate

Error creating: pods "etcd-operator-5884b9cd56-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-marketplace

replicaset-controller

marketplace-operator-64bf9778cb

FailedCreate

Error creating: pods "marketplace-operator-64bf9778cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller-operator

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-operator-5685fbc7d to 1
(x12)

openshift-authentication-operator

replicaset-controller

authentication-operator-7c6989d6c4

FailedCreate

Error creating: pods "authentication-operator-7c6989d6c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-674cbfbd9d to 1

openshift-operator-lifecycle-manager

deployment-controller

package-server-manager

ScalingReplicaSet

Scaled up replica set package-server-manager-854648ff6d to 1

openshift-cluster-node-tuning-operator

deployment-controller

cluster-node-tuning-operator

ScalingReplicaSet

Scaled up replica set cluster-node-tuning-operator-66c7586884 to 1

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-674cbfbd9d to 1

openshift-cluster-node-tuning-operator

deployment-controller

cluster-node-tuning-operator

ScalingReplicaSet

Scaled up replica set cluster-node-tuning-operator-66c7586884 to 1
(x10)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-854648ff6d

FailedCreate

Error creating: pods "package-server-manager-854648ff6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-ingress-operator

deployment-controller

ingress-operator

ScalingReplicaSet

Scaled up replica set ingress-operator-677db989d6 to 1
(x10)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

FailedCreate

Error creating: pods "cluster-node-tuning-operator-66c7586884-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-7d9c49f57b to 1

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-d64cfc9db to 1
(x14)

openshift-cluster-version

replicaset-controller

cluster-version-operator-745944c6b7

FailedCreate

Error creating: pods "cluster-version-operator-745944c6b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-kube-apiserver-operator

deployment-controller

kube-apiserver-operator

ScalingReplicaSet

Scaled up replica set kube-apiserver-operator-68bd585b to 1
(x10)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-5685fbc7d

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-5685fbc7d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving
(x9)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68bd585b

FailedCreate

Error creating: pods "kube-apiserver-operator-68bd585b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

FailedCreate

Error creating: pods "cluster-monitoring-operator-674cbfbd9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished
(x8)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7d9c49f57b

FailedCreate

Error creating: pods "catalog-operator-7d9c49f57b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-64488f9d78 to 1

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished
(x9)

openshift-ingress-operator

replicaset-controller

ingress-operator-677db989d6

FailedCreate

Error creating: pods "ingress-operator-677db989d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-image-registry

deployment-controller

cluster-image-registry-operator

ScalingReplicaSet

Scaled up replica set cluster-image-registry-operator-86d6d77c7c to 1
(x8)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-86d6d77c7c

FailedCreate

Error creating: pods "cluster-image-registry-operator-86d6d77c7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-d64cfc9db

FailedCreate

Error creating: pods "olm-operator-d64cfc9db-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x2)

openshift-config-operator

replicaset-controller

openshift-config-operator-64488f9d78

FailedCreate

Error creating: pods "openshift-config-operator-64488f9d78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening
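
Note: read together, the five default/apiserver rows (ShutdownInitiated, InFlightRequestsDrained, TerminationPreShutdownHooksFinished, AfterShutdownDelayDuration, HTTPServerStoppedListening) trace one graceful termination of the bootstrap kube-apiserver: on the signal it stops reporting ready but keeps serving, drains in-flight requests, runs its pre-shutdown hooks, waits out the shutdown delay (0s here), and only then closes the listener. A generic Go sketch of the same drain-then-stop ordering (the real implementation lives in k8s.io/apiserver; this only mirrors the sequence the events describe):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM)
	<-sig // "ShutdownInitiated": become unready, but keep serving

	ctx, cancel := context.WithTimeout(context.Background(), 70*time.Second)
	defer cancel()
	// Shutdown waits for in-flight requests ("InFlightRequestsDrained"),
	// then closes the listener ("HTTPServerStoppedListening").
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced close: %v", err)
	}
}
```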

kube-system

Required control plane pods have been created

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_28d3f6f6-67f4-4d12-8436-52560bac809e became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_55ab31a6-d6ff-4ba6-8f04-6f9cdaaa8ab9 became leader

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_f2c2a6b7-73d6-4787-9fe1-ca7e34d0f898 became leader
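
Note: the three LeaderElection rows mark the self-hosted control plane taking over from the bootstrap one: kube-scheduler, cluster-policy-controller, and kube-controller-manager each acquire their lock as they come up on master-0. A sketch of the same mechanism with client-go's leaderelection package (lock name, namespace, identity, and timings are illustrative assumptions, not these components' actual settings):

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A Lease object is the lock; whoever renews it is the leader.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "cluster-policy-controller-lock", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "master-0_example-uuid"},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start controllers") },
			OnStoppedLeading: func() { log.Println("lost the lease; stop work") },
		},
	})
}
```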

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found
(x7)

openshift-etcd-operator

replicaset-controller

etcd-operator-5884b9cd56

FailedCreate

Error creating: pods "etcd-operator-5884b9cd56-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-authentication-operator

replicaset-controller

authentication-operator-7c6989d6c4

FailedCreate

Error creating: pods "authentication-operator-7c6989d6c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-8565d84698

FailedCreate

Error creating: pods "openshift-controller-manager-operator-8565d84698-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-799b6db4d7

FailedCreate

Error creating: pods "openshift-apiserver-operator-799b6db4d7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68bd585b

FailedCreate

Error creating: pods "kube-apiserver-operator-68bd585b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

FailedCreate

Error creating: pods "cluster-monitoring-operator-674cbfbd9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-marketplace

replicaset-controller

marketplace-operator-64bf9778cb

FailedCreate

Error creating: pods "marketplace-operator-64bf9778cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-86d7cdfdfb

FailedCreate

Error creating: pods "kube-controller-manager-operator-86d7cdfdfb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5c74bfc494

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-5c74bfc494-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-86d6d77c7c

FailedCreate

Error creating: pods "cluster-image-registry-operator-86d6d77c7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-ingress-operator

replicaset-controller

ingress-operator-677db989d6

FailedCreate

Error creating: pods "ingress-operator-677db989d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-config-operator

replicaset-controller

openshift-config-operator-64488f9d78

FailedCreate

Error creating: pods "openshift-config-operator-64488f9d78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-dns-operator

replicaset-controller

dns-operator-589895fbb7

FailedCreate

Error creating: pods "dns-operator-589895fbb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-network-operator

replicaset-controller

network-operator-7c649bf6d4

FailedCreate

Error creating: pods "network-operator-7c649bf6d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-version

replicaset-controller

cluster-version-operator-745944c6b7

FailedCreate

Error creating: pods "cluster-version-operator-745944c6b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-7f65c457f5

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-7f65c457f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-d64cfc9db

FailedCreate

Error creating: pods "olm-operator-d64cfc9db-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-5685fbc7d

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-5685fbc7d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-854648ff6d

FailedCreate

Error creating: pods "package-server-manager-854648ff6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7d9c49f57b

FailedCreate

Error creating: pods "catalog-operator-7d9c49f57b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-8565d84698

SuccessfulCreate

Created pod: openshift-controller-manager-operator-8565d84698-49hzm
(x8)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-77899cf6d

FailedCreate

Error creating: pods "cluster-olm-operator-77899cf6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

FailedCreate

Error creating: pods "cluster-node-tuning-operator-66c7586884-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x5)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-69b6fc6b88

FailedCreate

Error creating: pods "service-ca-operator-69b6fc6b88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-config-operator

replicaset-controller

openshift-config-operator-64488f9d78

SuccessfulCreate

Created pod: openshift-config-operator-64488f9d78-vnl28

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-86d6d77c7c

SuccessfulCreate

Created pod: cluster-image-registry-operator-86d6d77c7c-k7dp2

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-86d7cdfdfb

SuccessfulCreate

Created pod: kube-controller-manager-operator-86d7cdfdfb-pfdrx

openshift-network-operator

replicaset-controller

network-operator-7c649bf6d4

SuccessfulCreate

Created pod: network-operator-7c649bf6d4-st2sr

openshift-etcd-operator

replicaset-controller

etcd-operator-5884b9cd56

SuccessfulCreate

Created pod: etcd-operator-5884b9cd56-27phk

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68bd585b

SuccessfulCreate

Created pod: kube-apiserver-operator-68bd585b-7gtw2

openshift-authentication-operator

replicaset-controller

authentication-operator-7c6989d6c4

SuccessfulCreate

Created pod: authentication-operator-7c6989d6c4-dkqc4

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-799b6db4d7

SuccessfulCreate

Created pod: openshift-apiserver-operator-799b6db4d7-rj9cl

openshift-ingress-operator

replicaset-controller

ingress-operator-677db989d6

SuccessfulCreate

Created pod: ingress-operator-677db989d6-blw5x

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller
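
Note: RegisteredNode is the hinge of this whole section. The node-controller has now admitted master-0, so the ManagementCPUsOverride rejections above stop and the replicaset-controller's retries start succeeding, which is the run of SuccessfulCreate rows surrounding this point. A sketch of watching for that transition with client-go (the field selector and Ready-condition check are the obvious way to express it, assumed rather than taken from the installer):

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Nodes().Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=master-0"})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		node, ok := ev.Object.(*corev1.Node)
		if !ok {
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("master-0 Ready=%s (%s)\n", c.Status, c.Reason)
			}
		}
	}
}
```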

openshift-dns-operator

replicaset-controller

dns-operator-589895fbb7

SuccessfulCreate

Created pod: dns-operator-589895fbb7-gmvnl

openshift-service-ca-operator

replicaset-controller

service-ca-operator-69b6fc6b88

SuccessfulCreate

Created pod: service-ca-operator-69b6fc6b88-p8hlq

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-d64cfc9db

SuccessfulCreate

Created pod: olm-operator-d64cfc9db-8qtmf

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

SuccessfulCreate

Created pod: cluster-monitoring-operator-674cbfbd9d-cxs8s

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-7f65c457f5

SuccessfulCreate

Created pod: kube-storage-version-migrator-operator-7f65c457f5-st7mk

openshift-marketplace

replicaset-controller

marketplace-operator-64bf9778cb

SuccessfulCreate

Created pod: marketplace-operator-64bf9778cb-mgb5v

openshift-cluster-version

replicaset-controller

cluster-version-operator-745944c6b7

SuccessfulCreate

Created pod: cluster-version-operator-745944c6b7-dcbvq

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-854648ff6d

SuccessfulCreate

Created pod: package-server-manager-854648ff6d-phgxj

openshift-network-operator

kubelet

network-operator-7c649bf6d4-st2sr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3"

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7d9c49f57b

SuccessfulCreate

Created pod: catalog-operator-7d9c49f57b-8jr6f

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5c74bfc494

SuccessfulCreate

Created pod: openshift-kube-scheduler-operator-5c74bfc494-bh886

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-5685fbc7d

SuccessfulCreate

Created pod: csi-snapshot-controller-operator-5685fbc7d-5v8g4

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

SuccessfulCreate

Created pod: cluster-node-tuning-operator-66c7586884-9vjl9

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-77899cf6d

SuccessfulCreate

Created pod: cluster-olm-operator-77899cf6d-r9zcq

openshift-network-operator

kubelet

network-operator-7c649bf6d4-st2sr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" in 3.43s (3.43s including waiting). Image size: 621647686 bytes.
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

BackOff

Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)
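
Note: kube-rbac-proxy-crio is a static pod that fronts CRI-O metrics with kube-rbac-proxy; early in bootstrap it exits (most likely because what it needs from the API server is not reachable yet) and the kubelet restarts it, which is why its Pulled/Created/Started rows and this BackOff all carry (x4). The kubelet's crash-loop backoff doubles the wait per restart from an initial delay up to a cap; 10s and 5m below are the upstream kubelet defaults, assumed here rather than read from this node:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // kubelet default (assumed)
		maxDelay     = 5 * time.Minute  // kubelet cap (assumed)
	)
	delay := initialDelay
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```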

assisted-installer

kubelet

assisted-installer-controller-v949k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef"

openshift-network-operator

kubelet

network-operator-7c649bf6d4-st2sr

Started

Started container network-operator

openshift-network-operator

kubelet

network-operator-7c649bf6d4-st2sr

Created

Created container: network-operator

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_0cfeef19-3a79-491b-ab67-9de5c1b2bad7 became leader

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

job-controller

mtu-prober

SuccessfulCreate

Created pod: mtu-prober-sbmgv

assisted-installer

kubelet

assisted-installer-controller-v949k

Created

Created container: assisted-installer-controller

openshift-network-operator

kubelet

mtu-prober-sbmgv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine

assisted-installer

kubelet

assisted-installer-controller-v949k

Started

Started container assisted-installer-controller

assisted-installer

kubelet

assisted-installer-controller-v949k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef" in 4.809s (4.809s including waiting). Image size: 687947017 bytes.

openshift-network-operator

kubelet

mtu-prober-sbmgv

Created

Created container: prober

openshift-network-operator

kubelet

mtu-prober-sbmgv

Started

Started container prober

openshift-network-operator

job-controller

mtu-prober

Completed

Job completed
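
Note: mtu-prober is a short-lived job the cluster-network-operator runs to discover the host MTU before rendering the OVN-Kubernetes config. At its simplest, probing comes down to reading interface MTUs; a stdlib sketch of that idea (the real prober also accounts for encapsulation overhead, which this does not):

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	// Report the MTU of every up, non-loopback interface.
	for _, ifc := range ifaces {
		if ifc.Flags&net.FlagUp != 0 && ifc.Flags&net.FlagLoopback == 0 {
			fmt.Printf("%s mtu=%d\n", ifc.Name, ifc.MTU)
		}
	}
}
```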

assisted-installer

job-controller

assisted-installer-controller

Completed

Job completed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-multus namespace
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Created

Created container: kube-rbac-proxy-crio
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Started

Started container kube-rbac-proxy-crio

openshift-multus

kubelet

multus-dllkj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192"

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-d5jxb

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916"

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-dllkj

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-krv7c

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-8d675b596 to 1

openshift-multus

replicaset-controller

multus-admission-controller-8d675b596

SuccessfulCreate

Created pod: multus-admission-controller-8d675b596-jgdmb

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovn-kubernetes namespace

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916" in 6.105s (6.105s including waiting). Image size: 528946249 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Started

Started container egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245"

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Created

Created container: egress-router-binary-copy

openshift-ovn-kubernetes

deployment-controller

ovnkube-control-plane

ScalingReplicaSet

Scaled up replica set ovnkube-control-plane-66b55d57d to 1

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-tf5qg

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-66b55d57d

SuccessfulCreate

Created pod: ovnkube-control-plane-66b55d57d-m77x2

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-host-network namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-diagnostics namespace

openshift-multus

kubelet

multus-dllkj

Created

Created container: kube-multus

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0"

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Created

Created container: cni-plugins

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Started

Started container kube-rbac-proxy

openshift-network-diagnostics

deployment-controller

network-check-source

ScalingReplicaSet

Scaled up replica set network-check-source-7c67b67d47 to 1

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Started

Started container cni-plugins

openshift-multus

kubelet

multus-dllkj

Started

Started container kube-multus

openshift-network-diagnostics

replicaset-controller

network-check-source-7c67b67d47

SuccessfulCreate

Created pod: network-check-source-7c67b67d47-sctv9

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0"

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245" in 7.037s (7.037s including waiting). Image size: 683169303 bytes.

openshift-multus

kubelet

multus-dllkj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" in 13.802s (13.802s including waiting). Image size: 1238047254 bytes.

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-w5fjg

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-node-identity namespace

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Created

Created container: bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7" in 1.823s (1.823s including waiting). Image size: 411585608 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7"

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-m7549

openshift-network-node-identity

kubelet

network-node-identity-m7549

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7" in 1.465s (1.465s including waiting). Image size: 407347126 bytes.

openshift-network-node-identity

kubelet

network-node-identity-m7549

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0"

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Started

Started container routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Created

Created container: routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a"

(x7)

openshift-multus

kubelet

network-metrics-daemon-krv7c

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered
(x18)

openshift-multus

kubelet

network-metrics-daemon-krv7c

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
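
Note: NetworkNotReady means the kubelet will not start any pod that needs a pod-network sandbox until a CNI config appears in /etc/kubernetes/cni/net.d/, which happens only once the multus and ovnkube-node pods above finish wiring the node; host-network pods are exempt, which is why the network stack itself can come up. The check reduces to looking for a config file in that directory; a stdlib sketch (the extensions below are an assumption about what counts as a config):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("cannot read %s: %v", dir, err)
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed config extensions
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file yet")
		return
	}
	fmt.Println("CNI configs:", confs)
}
```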

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" in 23.604s (23.604s including waiting). Image size: 1637445817 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: kubecfg-setup

openshift-network-node-identity

master-0_60f751b5-b0d6-4e62-a9de-028caece2c13

ovnkube-identity

LeaderElection

master-0_60f751b5-b0d6-4e62-a9de-028caece2c13 became leader

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-m7549

Created

Created container: approver

openshift-network-node-identity

kubelet

network-node-identity-m7549

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-m7549

Started

Started container webhook

openshift-network-node-identity

kubelet

network-node-identity-m7549

Created

Created container: webhook

openshift-network-node-identity

kubelet

network-node-identity-m7549

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" in 19.841s (19.841s including waiting). Image size: 1637445817 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a" in 17.684s (17.684s including waiting). Image size: 876146500 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Started

Started container ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Created

Created container: ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" in 24.549s (24.549s including waiting). Image size: 1637445817 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Created

Created container: whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Started

Started container whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Created

Created container: whereabouts-cni

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Started

Started container whereabouts-cni

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-66b55d57d-m77x2 became leader

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-m7549

Started

Started container approver

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-d5jxb

Created

Created container: kube-multus-additional-cni-plugins

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-tf5qg

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-tf5qg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-2w9mf

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
(x8)

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-dcbvq

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container nbdb
(x7)

openshift-network-diagnostics

kubelet

network-check-target-w5fjg

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-wh9cz" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-2w9mf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine
(x18)

openshift-network-diagnostics

kubelet

network-check-target-w5fjg

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

default

ovnkube-csr-approver-controller

csr-945gf

CSRApproved

CSR "csr-945gf" has been approved

default

ovnkube-csr-approver-controller

csr-rk95c

CSRApproved

CSR "csr-rk95c" has been approved

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-rfnqf

openshift-kube-apiserver-operator

multus

kube-apiserver-operator-68bd585b-7gtw2

AddedInterface

Add eth0 [10.128.0.8/23] from ovn-kubernetes

openshift-controller-manager-operator

multus

openshift-controller-manager-operator-8565d84698-49hzm

AddedInterface

Add eth0 [10.128.0.5/23] from ovn-kubernetes

openshift-apiserver-operator

multus

openshift-apiserver-operator-799b6db4d7-rj9cl

AddedInterface

Add eth0 [10.128.0.18/23] from ovn-kubernetes

openshift-cluster-storage-operator

multus

csi-snapshot-controller-operator-5685fbc7d-5v8g4

AddedInterface

Add eth0 [10.128.0.19/23] from ovn-kubernetes

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-49hzm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b"

openshift-etcd-operator

multus

etcd-operator-5884b9cd56-27phk

AddedInterface

Add eth0 [10.128.0.10/23] from ovn-kubernetes

openshift-kube-storage-version-migrator-operator

multus

kube-storage-version-migrator-operator-7f65c457f5-st7mk

AddedInterface

Add eth0 [10.128.0.23/23] from ovn-kubernetes

openshift-kube-scheduler-operator

multus

openshift-kube-scheduler-operator-5c74bfc494-bh886

AddedInterface

Add eth0 [10.128.0.14/23] from ovn-kubernetes

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783"

openshift-cluster-olm-operator

multus

cluster-olm-operator-77899cf6d-r9zcq

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3"

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953"

openshift-authentication-operator

multus

authentication-operator-7c6989d6c4-dkqc4

AddedInterface

Add eth0 [10.128.0.7/23] from ovn-kubernetes

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-p8hlq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba"

openshift-service-ca-operator

multus

service-ca-operator-69b6fc6b88-p8hlq

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-5v8g4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3"

openshift-config-operator

multus

openshift-config-operator-64488f9d78-vnl28

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43"

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-st7mk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9"

openshift-network-operator

kubelet

iptables-alerter-rfnqf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460"

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-bh886

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282"

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-rj9cl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab"

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-pfdrx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56"

openshift-kube-controller-manager-operator

multus

kube-controller-manager-operator-86d7cdfdfb-pfdrx

AddedInterface

Add eth0 [10.128.0.9/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-7gtw2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-7gtw2

Created

Created container: kube-apiserver-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-7gtw2

Started

Started container kube-apiserver-operator

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-68bd585b-7gtw2_7d5ed667-a8ab-4f8e-9482-171b0e2e4dfc became leader

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.34"

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-serviceaccountissuercontroller

kube-apiserver-operator

ServiceAccountIssuer

Issuer set to default value "https://kubernetes.default.svc"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-apiserver-operator

kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.34"}]

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
(x5)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-k7dp2

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found
(x5)

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
(x5)

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
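
These FailedMount events repeat (note the x5 counts) until the owning operators create the serving-cert secrets, at which point kubelet's mount retry succeeds on its own. A sketch, assuming a kubeconfig at the default path, that lists all pending FailedMount events cluster-wide via the supported "reason" field selector:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // An empty namespace lists events across all namespaces.
        evs, err := cs.CoreV1().Events("").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "reason=FailedMount",
        })
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("%s %s/%s (x%d): %s\n", e.LastTimestamp.Format("15:04:05"),
                e.Namespace, e.InvolvedObject.Name, e.Count, e.Message)
        }
    }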

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
(x5)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed
(x5)

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x5)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
(x5)

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-8qtmf

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found
(x5)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-8jr6f

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
(x5)

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x5)

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/loadbalancer-serving-ca: configmaps "loadbalancer-serving-ca" already exists
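
This failure is a benign race: two reconcile passes tried to create the same configmap, and the loser's create is expected to fail. The usual client-go pattern treats IsAlreadyExists as success, as in this sketch (default kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "loadbalancer-serving-ca"}}
        _, err = cs.CoreV1().ConfigMaps("openshift-kube-apiserver-operator").Create(
            context.TODO(), cm, metav1.CreateOptions{})
        if apierrors.IsAlreadyExists(err) {
            // Another sync loop won the race; treat as success and fall through to update logic.
            fmt.Println("configmap already exists, nothing to do")
        } else if err != nil {
            panic(err)
        }
    }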

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID
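
The kubelet flips these pressure conditions as it starts; the events mirror Node.Status.Conditions. A sketch that reads the same conditions directly off the master-0 Node object (default kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "master-0", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // MemoryPressure/DiskPressure/PIDPressure should be False; Ready should be True.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
        }
    }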

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Started

Started container copy-catalogd-manifests

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43" already present on machine

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Created

Created container: openshift-api

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Started

Started container openshift-api

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5"

openshift-network-operator

kubelet

iptables-alerter-rfnqf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" already present on machine

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-rj9cl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab"

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-rj9cl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab" in 511ms (511ms including waiting). Image size: 512273539 bytes.

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-pfdrx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" in 515ms (515ms including waiting). Image size: 508888174 bytes.

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-pfdrx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953" in 390ms (390ms including waiting). Image size: 513220825 bytes.

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-5v8g4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3" already present on machine

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-5v8g4

Created

Created container: csi-snapshot-controller-operator

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Created

Created container: copy-catalogd-manifests

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-5v8g4

Started

Started container csi-snapshot-controller-operator

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-7577d6f48

SuccessfulCreate

Created pod: csi-snapshot-controller-7577d6f48-vd52m

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-5685fbc7d-5v8g4_63bc01bf-6d1a-4015-9391-36b7749123d6 became leader
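
Each operator elects a single active replica by acquiring a named lock, which is what produces these LeaderElection events. A compact sketch using client-go's leaderelection package; the lock name and namespace here are illustrative placeholders, not the operator's actual lock:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        id, _ := os.Hostname()
        // Hypothetical lock; real operators use their own lock object and namespace.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "demo-operator-lock", Namespace: "default"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { fmt.Println(id, "became leader") },
                OnStoppedLeading: func() { fmt.Println(id, "lost leadership") },
            },
        })
    }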

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"
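
Progressing and Available track the csi-snapshot-controller Deployment as it creates and readies pods. A sketch that checks the same availableReplicas-versus-spec.replicas gap the operator is waiting on (default kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        d, err := cs.AppsV1().Deployments("openshift-cluster-storage-operator").Get(
            context.TODO(), "csi-snapshot-controller", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        want := int32(1)
        if d.Spec.Replicas != nil {
            want = *d.Spec.Replicas
        }
        // The operator reports Available=False until availableReplicas reaches spec.replicas.
        fmt.Printf("available %d/%d, updated %d\n", d.Status.AvailableReplicas, want, d.Status.UpdatedReplicas)
    }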

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-7577d6f48 to 1

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-7f65c457f5-st7mk_fd8d762e-6717-4a13-9191-4efc0276abb9 became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ServiceAccountCreated

Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-route-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator namespace

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.34"

openshift-network-diagnostics

kubelet

network-check-target-w5fjg

Started

Started container network-check-target-container

openshift-network-diagnostics

kubelet

network-check-target-w5fjg

Created

Created container: network-check-target-container

openshift-network-diagnostics

kubelet

network-check-target-w5fjg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine

openshift-network-diagnostics

multus

network-check-target-w5fjg

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes
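
Multus attaches the pod's eth0 from the ovn-kubernetes network; the bracketed value is the pod address in CIDR form. Parsing it with the standard library separates the pod IP from the node's /23 subnet:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The address from the AddedInterface event above.
        ip, subnet, err := net.ParseCIDR("10.128.0.4/23")
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod IP %s in node subnet %s\n", ip, subnet)
    }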

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-8565d84698-49hzm_5cba68f7-dd6a-4b70-b4c1-79daafc0df35 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "build": map[string]any{
+     "buildDefaults": map[string]any{"resources": map[string]any{}},
+     "imageTemplateFormat": map[string]any{
+       "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e95c47e9d"...),
+     },
+   },
+   "controllers": []any{
+     string("openshift.io/build"), string("openshift.io/build-config-change"),
+     string("openshift.io/builder-rolebindings"),
+     string("openshift.io/builder-serviceaccount"),
+     string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"),
+     string("openshift.io/deployer-rolebindings"),
+     string("openshift.io/deployer-serviceaccount"),
+     string("openshift.io/deploymentconfig"), string("openshift.io/image-import"),
+     string("openshift.io/image-puller-rolebindings"),
+     string("openshift.io/image-signature-import"),
+     string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"),
+     string("openshift.io/ingress-to-route"),
+     string("openshift.io/origin-namespace"), ...,
+   },
+   "deployer": map[string]any{
+     "imageTemplateFormat": map[string]any{
+       "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52d35a623b"...),
+     },
+   },
+   "featureGates": []any{string("BuildCSIVolumes=true")},
+   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
  }
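
The "+" lines above read like a go-cmp structured diff, where "+" marks keys present only in the updated observed config. A minimal sketch, assuming github.com/google/go-cmp is available, that produces the same kind of output for a toy config change (values abbreviated from the event):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        old := map[string]any{}
        updated := map[string]any{
            "featureGates": []any{"BuildCSIVolumes=true"},
            "ingress":      map[string]any{"ingressIPNetworkCIDR": ""},
        }
        // Keys only in `updated` appear as "+" lines in the diff.
        fmt.Println(cmp.Diff(old, updated))
    }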

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found
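
The ConfigMap, Role, and RoleBinding creates above fail only because they raced the NamespaceCreated event; the static-resource controller simply retries until the namespace exists. A sketch of the equivalent wait, assuming a recent k8s.io/apimachinery (for PollUntilContextTimeout) and the default kubeconfig path:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until the namespace exists; creates into it stop failing at that point.
        err = wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Namespaces().Get(ctx, "openshift-controller-manager", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // keep polling
                }
                return err == nil, err
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("namespace present")
    }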

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
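
Every operator that consumes the cluster FeatureGate logs this same Enabled/Disabled snapshot at startup; the identical dump recurs below for the kube-controller-manager, kube-scheduler, and openshift-apiserver operators. A tiny Go sketch of the logged shape, using local stand-in types rather than the actual library API, shows how a consumer would branch on a gate:

type FeatureGateName string

// Features mirrors the shape of the logged featuregates.Features value;
// the type names here are stand-ins, not the library's exact API.
type Features struct {
    Enabled  []FeatureGateName
    Disabled []FeatureGateName
}

// IsEnabled reports whether a gate appears in the Enabled set; per the dump
// above, IsEnabled("NewOLM") is true and IsEnabled("GatewayAPI") is false.
func (f Features) IsEnabled(name FeatureGateName) bool {
    for _, g := range f.Enabled {
        if g == name {
            return true
        }
    }
    return false
}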

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc"

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7c6989d6c4-dkqc4_ae339e2c-f2f2-4a59-819d-76e1c2985a2d became leader
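
The LeaderElection events ("... became leader") record each operator instance winning its lock object, here cluster-authentication-operator-lock; the suffix after the underscore is the contender's unique identity. A sketch using client-go's leader-election helpers against a Lease lock follows; the durations are assumptions, and the real operators wire this up through library helpers rather than calling RunOrDie directly.

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLock blocks until ctx is done, running run only while this process
// holds the named lock, which is what produces the "became leader" event.
func runWithLock(ctx context.Context, cs kubernetes.Interface, identity string, run func(context.Context)) {
    lock := &resourcelock.LeaseLock{
        LeaseMeta: metav1.ObjectMeta{
            Name:      "cluster-authentication-operator-lock",
            Namespace: "openshift-authentication-operator",
        },
        Client:     cs.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
    }
    leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 137 * time.Second, // assumed values, not read from the cluster
        RenewDeadline: 107 * time.Second,
        RetryPeriod:   26 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: run,
            OnStoppedLeading: func() {},
        },
    })
}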

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-69b6fc6b88-p8hlq_6fea134e-56ea-4758-90f4-57305dd4e088 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-58959cd4d6 to 1

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-86d7cdfdfb-pfdrx_4757b65d-8445-4fe6-85a9-7123326886f6 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.34"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.34"}]

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6f7fd6c796 to 1
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-controller-manager

replicaset-controller

controller-manager-6f7fd6c796

SuccessfulCreate

Created pod: controller-manager-6f7fd6c796-tlbts

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "operator" changed from "" to "4.18.34"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
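
The two ObserveTLSSecurityProfile events record TLS 1.2 as the floor plus the suite list above being written into the kube-controller-manager's serving config (these are the values of OpenShift's default Intermediate profile). In Go terms the same settings map onto crypto/tls as below; note that Go ignores CipherSuites entries for TLS 1.3, so only the ECDHE suites actually constrain anything there, while the TLS 1.3 suites in the event are always available.

import "crypto/tls"

// newServingTLSConfig mirrors the observed minTLSVersion/cipherSuites values.
func newServingTLSConfig() *tls.Config {
    return &tls.Config{
        MinVersion: tls.VersionTLS12,
        CipherSuites: []uint16{ // TLS 1.2 suites from the event above
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
            tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
            tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
            tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
        },
    }
}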

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-rlbzd")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }
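
The ObservedConfigChanged diff shows the config observer merging its observations (cluster CIDR, cluster name, feature gates, serving TLS settings) into a single nested map[string]any that becomes the operand's observed config. Below is a sketch of building a fragment of that map with apimachinery's unstructured helpers, using values copied from the event; the helper choice is an assumption about style, not the operator's exact code.

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// buildObservedConfig assembles a subset of the observed config shown above.
func buildObservedConfig() (map[string]any, error) {
    observed := map[string]any{}
    if err := unstructured.SetNestedStringSlice(observed, []string{"10.128.0.0/16"}, "extendedArguments", "cluster-cidr"); err != nil {
        return nil, err
    }
    if err := unstructured.SetNestedStringSlice(observed, []string{"sno-rlbzd"}, "extendedArguments", "cluster-name"); err != nil {
        return nil, err
    }
    if err := unstructured.SetNestedField(observed, "VersionTLS12", "servingInfo", "minTLSVersion"); err != nil {
        return nil, err
    }
    return observed, nil
}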

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-5c74bfc494-bh886_f8cd63bc-133e-4d24-8a5f-8268a9ad5ecf became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.34"
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-0

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.34"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.34"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well")
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-58959cd4d6

SuccessfulCreate

Created pod: route-controller-manager-58959cd4d6-d985l

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.34"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1"

openshift-cluster-storage-operator

multus

csi-snapshot-controller-7577d6f48-vd52m

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-5884b9cd56-27phk_b2bf07b9-9cf1-46d6-93b2-609ddaa6817b became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-799b6db4d7-rj9cl_372044af-01c9-43bc-832f-a1680c8bde1e became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-storage-version-migrator

replicaset-controller

migrator-57ccdf9b5

SuccessfulCreate

Created pod: migrator-57ccdf9b5-tbcsh

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-57ccdf9b5 to 1

openshift-network-operator

kubelet

iptables-alerter-rfnqf

Started

Started container iptables-alerter
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-58959cd4d6-d985l

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca namespace
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.34"

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-network-operator

kubelet

iptables-alerter-rfnqf

Created

Created container: iptables-alerter

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found",Available changed from Unknown to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found"
(x2)

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-tlbts

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found
(x2)

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-tlbts

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-84bfdbbb7f to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

RequiredInstallerResourcesMissing

configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-56f6fc54fd to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-58959cd4d6 to 0 from 1
(x3)
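
This up-then-down pair (route-controller-manager-56f6fc54fd to 1, then 58959cd4d6 to 0) is a standard surge rollout: the operator updated the Deployment after creating the missing config, so a new ReplicaSet replaces the one whose pod was stuck on FailedMount. A sketch of a strategy that yields this ordering follows; the values are assumed, since the actual Deployment's strategy is not shown in these events.

import (
    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// surgeStrategy scales the new ReplicaSet up before the old one goes down,
// matching the "to 1 from 0" then "to 0 from 1" order logged above.
func surgeStrategy() appsv1.DeploymentStrategy {
    maxSurge := intstr.FromInt(1)
    maxUnavailable := intstr.FromInt(0)
    return appsv1.DeploymentStrategy{
        Type: appsv1.RollingUpdateDeploymentStrategyType,
        RollingUpdate: &appsv1.RollingUpdateDeployment{
            MaxSurge:       &maxSurge,
            MaxUnavailable: &maxUnavailable,
        },
    }
}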

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-tlbts

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x3)

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-tlbts

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found")

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-controller-manager

replicaset-controller

controller-manager-6f7fd6c796

SuccessfulDelete

Deleted pod: controller-manager-6f7fd6c796-tlbts

openshift-service-ca

replicaset-controller

service-ca-84bfdbbb7f

SuccessfulCreate

Created pod: service-ca-84bfdbbb7f-bc2m2

openshift-controller-manager

replicaset-controller

controller-manager-7f9d55fb8

SuccessfulCreate

Created pod: controller-manager-7f9d55fb8-5ndvl

openshift-route-controller-manager

replicaset-controller

route-controller-manager-58959cd4d6

SuccessfulDelete

Deleted pod: route-controller-manager-58959cd4d6-d985l
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-58959cd4d6-d985l

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-58959cd4d6-d985l

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-6f7fd6c796 to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-7f9d55fb8 to 1 from 0

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-56f6fc54fd

SuccessfulCreate

Created pod: route-controller-manager-56f6fc54fd-nwfzl

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc" in 2.987s (2.987s including waiting). Image size: 495064829 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5" in 4.036s (4.036s including waiting). Image size: 495994161 bytes.

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "apiServerArguments": map[string]any{
+     "feature-gates": []any{
+       string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+       string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+       string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+       string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+     },
+   },
+   "projectConfig": map[string]any{"projectRequestMessage": string("")},
+   "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")},
+   "servingInfo": map[string]any{
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+   },
+   "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}},
  }
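The diff above is persisted to spec.observedConfig on the cluster-scoped openshiftapiservers.operator.openshift.io/cluster resource. A sketch of reading the merged result back, assuming the dynamic client (this CRD has no typed client in plain client-go):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        dyn, err := dynamic.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        gvr := schema.GroupVersionResource{
            Group: "operator.openshift.io", Version: "v1", Resource: "openshiftapiservers",
        }
        obj, err := dyn.Resource(gvr).Get(context.Background(), "cluster", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // spec.observedConfig holds the map diffed in the event above.
        observed, _, _ := unstructured.NestedMap(obj.Object, "spec", "observedConfig")
        fmt.Printf("%v\n", observed)
    }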

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-bc2m2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" already present on machine

openshift-service-ca

multus

service-ca-84bfdbbb7f-bc2m2

AddedInterface

Add eth0 [10.128.0.31/23] from ovn-kubernetes

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-tbcsh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053"

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-7577d6f48-vd52m

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-7577d6f48-vd52m became leader
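LeaderElection events like this one record a controller winning its coordination lock. For context, a hedged sketch of the standard client-go lease-based pattern; the lease namespace/name and timings here are illustrative, not read from the snapshot controller's actual configuration:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname()

        // Lease-based lock; namespace and name are illustrative placeholders.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Namespace: "openshift-cluster-storage-operator", Name: "snapshot-controller-leader"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
                OnStoppedLeading: func() { log.Println("lost leadership") },
            },
        })
    }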

openshift-kube-storage-version-migrator

multus

migrator-57ccdf9b5-tbcsh

AddedInterface

Add eth0 [10.128.0.30/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1" in 2.836s (2.836s including waiting). Image size: 463700811 bytes.

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well")

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Started

Started container openshift-config-operator

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Created

Created container: openshift-config-operator

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Started

Started container copy-operator-controller-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Created

Created container: copy-operator-controller-manifests

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.sno.openstack.lab"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6"

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-bc2m2

Created

Created container: service-ca-controller

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-tlbts

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager"/"serving-cert" not registered

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-tlbts

FailedMount

MountVolume.SetUp failed for volume "client-ca" : object "openshift-controller-manager"/"client-ca" not registered

openshift-route-controller-manager

kubelet

route-controller-manager-58959cd4d6-d985l

FailedMount

MountVolume.SetUp failed for volume "client-ca" : object "openshift-route-controller-manager"/"client-ca" not registered

openshift-route-controller-manager

kubelet

route-controller-manager-58959cd4d6-d985l

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : object "openshift-route-controller-manager"/"serving-cert" not registered

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed
(x5)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing
(x5)

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-dcbvq

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

NamespaceCreated

Created Namespace/openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
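"Operation cannot be fulfilled ... the object has been modified" is the API server's optimistic-concurrency conflict (HTTP 409): the operator read authentications.operator.openshift.io/cluster, another writer bumped its resourceVersion, and the stale write was rejected. Operators simply retry; the canonical client-go pattern is retry.RetryOnConflict, sketched here against an illustrative Deployment update standing in for the operator-config write:

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // Re-read and re-apply on every attempt so each write carries the
        // latest resourceVersion — the fix the error message asks for.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := client.AppsV1().Deployments("default").Get(ctx, "example", metav1.GetOptions{}) // illustrative object
            if err != nil {
                return err
            }
            replicas := int32(2)
            d.Spec.Replicas = &replicas
            _, err = client.AppsV1().Deployments("default").Update(ctx, d, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            log.Fatal(err)
        }
    }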

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-k7dp2

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-apiserver because it was missing
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing
(x5)

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-bc2m2

Started

Started container service-ca-controller
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver namespace

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400)
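The %!d(float64=0) fragments in this message are not export corruption: they are Go's fmt package flagging a verb/type mismatch (the operator formats a float64 with %d), so the intended reading is "changed from 0 to 86400". A two-line reproduction:

    package main

    import "fmt"

    func main() {
        // %d does not accept float64, so fmt emits %!d(float64=...) —
        // exactly the artifact seen in the event message above.
        fmt.Printf("accessTokenMaxAgeSeconds changed from %d to %d\n", float64(0), float64(86400))
        // Output: accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400)
    }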

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-84bfdbbb7f-bc2m2_de055ee0-3137-44cb-824b-8d388244d1f6 became leader
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from "" to https://api.sno.openstack.lab:6443

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{OperatorAvailable True 2026-03-08 00:21:33 +0000 UTC AsExpected } {OperatorProgressing False 2026-03-08 00:21:33 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-03-08 00:21:33 +0000 UTC AsExpected }]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator

authentication-operator

CSRApproval

The CSR "system:openshift:openshift-authenticator-drllc" has been approved

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.34"} {"feature-gates" "4.18.34"}]
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.34"
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.34"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-64488f9d78-vnl28_c5ac7b6d-5eeb-4f0a-b312-ed142799e688 became leader
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from "" to https://kubernetes.default.svc
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

CSRCreated

A csr "system:openshift:openshift-authenticator-drllc" is created for OpenShiftAuthenticatorCertRequester

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

NoValidCertificateFound

No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-oauth-apiserver because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-tbcsh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-tbcsh

Started

Started container migrator

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-tbcsh

Created

Created container: migrator

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-7f9d55fb8 to 0 from 1

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-tbcsh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053" in 4.356s (4.356s including waiting). Image size: 443271011 bytes.
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-rlbzd")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, }

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.34"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"
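
The static-pod operators keep every input configmap and secret as a revision-suffixed copy, so "configmap audit-0 not found" only means no revision has been cut yet and revision 1 is being started. A minimal Go sketch of that naming convention (the helper is illustrative, not the operator's own code):

    package main

    import "fmt"

    // revisioned maps base resource names to their per-revision copies,
    // e.g. "audit" at revision 1 becomes "audit-1".
    func revisioned(names []string, revision int) []string {
        out := make([]string, 0, len(names))
        for _, n := range names {
            out = append(out, fmt.Sprintf("%s-%d", n, revision))
        }
        return out
    }

    func main() {
        // "audit-0" is the revision-0 copy the controller looked for above.
        fmt.Println(revisioned([]string{"audit"}, 0)) // [audit-0]
    }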

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-7f9d55fb8

SuccessfulDelete

Deleted pod: controller-manager-7f9d55fb8-5ndvl

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-7775b8f858 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-tbcsh

Started

Started container graceful-termination

openshift-controller-manager

replicaset-controller

controller-manager-7775b8f858

SuccessfulCreate

Created pod: controller-manager-7775b8f858-tgbrj

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-oauth-apiserver namespace
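
The namespace-security-allocation controller assigns each new namespace its UID and MCS ranges and records them as annotations on the namespace object. A sketch of reading them back, assuming a configured client-go clientset (the annotation keys are the standard OpenShift ones, quoted from memory):

    package sccranges

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printSCCRanges reads the annotations the allocation controller
    // writes onto a namespace once its SCC ranges are created.
    func printSCCRanges(ctx context.Context, c kubernetes.Interface, ns string) error {
        n, err := c.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Println("uid-range:", n.Annotations["openshift.io/sa.scc.uid-range"])
        fmt.Println("mcs:", n.Annotations["openshift.io/sa.scc.mcs"])
        return nil
    }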

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-tbcsh

Created

Created container: graceful-termination

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.serving-cert.secret

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.34"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.34"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.34"} {"csi-snapshot-controller" "4.18.34"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6" in 6.667s (6.667s including waiting). Image size: 511164376 bytes.

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-apiserver because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-7775b8f858 to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreateFailed

Failed to create Secret/etcd-client -n openshift-apiserver: secrets "etcd-client" already exists

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nResourceSyncControllerDegraded: secrets \"etcd-client\" already exists"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nResourceSyncControllerDegraded: secrets \"etcd-client\" already exists" to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "
(x41)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing
(x6)

openshift-multus

kubelet

network-metrics-daemon-krv7c

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found
(x4)

openshift-controller-manager

kubelet

controller-manager-7775b8f858-tgbrj

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x6)

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-8qtmf

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-56f6fc54fd-nwfzl

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x6)

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x6)

openshift-multus

kubelet

network-metrics-daemon-krv7c

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found
(x6)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found
(x6)

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x6)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-8jr6f

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x6)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
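
The burst of FailedMount events above is the usual bootstrap ordering race: these pods reference serving-cert secrets that the service-ca operator and the owning operators have not minted yet, and kubelet retries the mount until the secret exists. A sketch of the equivalent wait with client-go (names and timeouts are illustrative):

    package bootstrapwait

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForSecret polls until the named secret exists, roughly what
    // kubelet's mount retry loop achieves for the pods above.
    func waitForSecret(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := c.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // not created yet, keep waiting
                }
                return err == nil, err
            })
    }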

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well")
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-k7dp2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-dcbvq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.34"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-77899cf6d-r9zcq_e82a6c38-b015-4d31-8edf-c5259849971d became leader
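
This is client-go leader election: each operator replica contends for a named lock (here the cluster-olm-operator-lock lease) and only the winner runs its controllers. A minimal sketch against the same lock name, assuming in-cluster credentials; the lease timings are assumed slow defaults, not values from this log:

    package main

    import (
        "context"
        "os"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname() // identity shows up in the LeaderElection event

        lock, err := resourcelock.New(
            resourcelock.LeasesResourceLock,
            "openshift-cluster-olm-operator", "cluster-olm-operator-lock",
            client.CoreV1(), client.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: id},
        )
        if err != nil {
            panic(err)
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            ReleaseOnCancel: true,
            LeaseDuration:   137 * time.Second, // assumed slow defaults
            RenewDeadline:   107 * time.Second,
            RetryPeriod:     26 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // start the operator's controllers here
                },
                OnStoppedLeading: func() {
                    os.Exit(0) // exit so the pod restarts and re-contends
                },
            },
        })
    }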

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-66c7586884-9vjl9

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70"

openshift-image-registry

multus

cluster-image-registry-operator-86d6d77c7c-k7dp2

AddedInterface

Add eth0 [10.128.0.6/23] from ovn-kubernetes

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-65677d845c to 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-ingress-operator

multus

ingress-operator-677db989d6-blw5x

AddedInterface

Add eth0 [10.128.0.12/23] from ovn-kubernetes

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0"

openshift-controller-manager

replicaset-controller

controller-manager-7775b8f858

SuccessfulDelete

Deleted pod: controller-manager-7775b8f858-tgbrj
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-apiserver

replicaset-controller

apiserver-65677d845c

SuccessfulCreate

Created pod: apiserver-65677d845c-495g9

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-dns-operator

multus

dns-operator-589895fbb7-gmvnl

AddedInterface

Add eth0 [10.128.0.11/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-66c7586884-9vjl9

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-8597858f97 to 1 from 0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace

openshift-controller-manager

replicaset-controller

controller-manager-8597858f97

SuccessfulCreate

Created pod: controller-manager-8597858f97-kb2l8

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379,https://localhost:2379

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
(x3)
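
The observed feature-gates argument is a flat comma-separated list of Name=true/false pairs. A small sketch of parsing it back into a map (illustrative helper, not the operator's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseFeatureGates turns "Name=true,Other=false" into a map,
    // mirroring the shape of the argument value shown above.
    func parseFeatureGates(s string) map[string]bool {
        gates := make(map[string]bool)
        for _, kv := range strings.Split(s, ",") {
            if name, val, ok := strings.Cut(kv, "="); ok {
                gates[name] = val == "true"
            }
        }
        return gates
    }

    func main() {
        g := parseFeatureGates("AdminNetworkPolicy=true,NodeSwap=false")
        fmt.Println(g["AdminNetworkPolicy"], g["NodeSwap"]) // true false
    }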

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config:
  map[string]any(
-   nil,
+   {
+     "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
+     "oauthConfig": map[string]any{
+       "assetPublicURL": string(""),
+       "loginURL": string("https://api.sno.openstack.lab:6443"),
+       "templates": map[string]any{
+         "error": string("/var/config/system/secrets/v4-0-"...),
+         "login": string("/var/config/system/secrets/v4-0-"...),
+         "providerSelection": string("/var/config/system/secrets/v4-0-"...),
+       },
+       "tokenConfig": map[string]any{
+         "accessTokenMaxAgeSeconds": float64(86400),
+         "authorizeTokenMaxAgeSeconds": float64(300),
+       },
+     },
+     "serverArguments": map[string]any{
+       "audit-log-format": []any{string("json")},
+       "audit-log-maxbackup": []any{string("10")},
+       "audit-log-maxsize": []any{string("100")},
+       "audit-log-path": []any{string("/var/log/oauth-server/audit.log")},
+       "audit-policy-file": []any{string("/var/run/configmaps/audit/audit."...)},
+     },
+     "servingInfo": map[string]any{
+       "cipherSuites": []any{
+         string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+         string("TLS_CHACHA20_POLY1305_SHA256"),
+         string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM"...), ...,
+       },
+       "minTLSVersion": string("VersionTLS12"),
+     },
+     "volumesToMount": map[string]any{"identityProviders": string("{}")},
+   },
  )
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
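
These are IANA cipher-suite names; when a Go server applies the profile they map onto crypto/tls constants. One caveat: Go's crypto/tls does not let callers pick TLS 1.3 suites (TLS_AES_128_GCM_SHA256 and friends are always enabled), so only the TLS 1.2 ECDHE suites are configurable. A sketch under that assumption:

    package tlsprofile

    import "crypto/tls"

    // Configurable TLS 1.2 suites from the observed profile; the TLS 1.3
    // names in the event are skipped because crypto/tls always enables them.
    var cipherNameToID = map[string]uint16{
        "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256":       tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":         tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
        "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384":       tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384":         tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
        "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
        "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256":   tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
    }

    // ServerConfig builds a *tls.Config matching the observed profile:
    // minTLSVersion VersionTLS12 plus the recognized suite names.
    func ServerConfig(names []string) *tls.Config {
        cfg := &tls.Config{MinVersion: tls.VersionTLS12}
        for _, n := range names {
            if id, ok := cipherNameToID[n]; ok {
                cfg.CipherSuites = append(cfg.CipherSuites, id)
            }
        }
        return cfg
    }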

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreateFailed

Failed to create ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role: client rate limiter Wait returned an error: context canceled

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{
+   "admission": map[string]any{
+     "pluginConfig": map[string]any{
+       "PodSecurity": map[string]any{"configuration": map[string]any{...}},
+       "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}},
+       "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}},
+     },
+   },
+   "apiServerArguments": map[string]any{
+     "api-audiences": []any{string("https://kubernetes.default.svc")},
+     "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},
+     "feature-gates": []any{
+       string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+       string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+       string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+       string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+     },
+     "goaway-chance": []any{string("0")},
+     "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")},
+     "send-retry-after-while-not-ready-once": []any{string("true")},
+     "service-account-issuer": []any{string("https://kubernetes.default.svc")},
+     "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")},
+     "shutdown-delay-duration": []any{string("0s")},
+   },
+   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
+   "gracefulTerminationDuration": string("15"),
+   "servicesSubnet": string("172.30.0.0/16"),
+   "servingInfo": map[string]any{
+     "bindAddress": string("0.0.0.0:6443"),
+     "bindNetwork": string("tcp4"),
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+     "namedCertificates": []any{
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-resou"...),
+         "keyFile": string("/etc/kubernetes/static-pod-resou"...),
+       },
+     },
+   },
  }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
(x2)
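
The write error above is ordinary optimistic-concurrency contention: another client updated the operator resource between this controller's read and its update, so the stale resourceVersion was rejected with a 409 Conflict. The standard client-go remedy is to re-read and retry; a sketch against a hypothetical ConfigMap (the "default"/"demo" coordinates are placeholders, not from this cluster):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // "default"/"demo" are placeholders for this sketch. Re-reading inside
        // the closure makes every attempt carry a fresh resourceVersion;
        // RetryOnConflict retries only on 409 Conflict responses.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            cm, err := clientset.CoreV1().ConfigMaps("default").Get(
                context.TODO(), "demo", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if cm.Data == nil {
                cm.Data = map[string]string{}
            }
            cm.Data["observedConfig"] = "..." // the mutation being attempted
            _, err = clientset.CoreV1().ConfigMaps("default").Update(
                context.TODO(), cm, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            panic(err)
        }
    }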

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '<nil>' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"
(x4)

openshift-apiserver

kubelet

apiserver-65677d845c-495g9

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x101)
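
The kubelet re-attempts volume setup until the referenced object exists, which is why this event repeats 101 times while the serving-cert secret is still being minted by the service-ca machinery. A quick existence probe in Go, using the names from the event above:

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _, err = clientset.CoreV1().Secrets("openshift-apiserver").Get(
            context.TODO(), "serving-cert", metav1.GetOptions{})
        switch {
        case apierrors.IsNotFound(err):
            fmt.Println("serving-cert not created yet; kubelet keeps retrying the mount")
        case err != nil:
            panic(err)
        default:
            fmt.Println("serving-cert exists; FailedMount events should stop")
        }
    }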

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes
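
AddedInterface events come from Multus, which records each attached network in the pod's k8s.v1.cni.cncf.io/network-status annotation. A sketch that reads the annotation back; the struct models only the fields used here (the real annotation also carries mac, default, dns, ...), and the pod name is taken from the event above:

    package main

    import (
        "context"
        "encoding/json"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // networkStatus models only the annotation fields used below.
    type networkStatus struct {
        Name      string   `json:"name"`
        Interface string   `json:"interface"`
        IPs       []string `json:"ips"`
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := clientset.CoreV1().Pods("openshift-kube-scheduler").Get(
            context.TODO(), "installer-1-master-0", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        var statuses []networkStatus
        if raw, ok := pod.Annotations["k8s.v1.cni.cncf.io/network-status"]; ok {
            if err := json.Unmarshal([]byte(raw), &statuses); err != nil {
                panic(err)
            }
        }
        for _, s := range statuses {
            // e.g. ovn-kubernetes via eth0: [10.128.0.36/23]
            fmt.Printf("%s via %s: %v\n", s.Name, s.Interface, s.IPs)
        }
    }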

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: caused by changes in data.pod.yaml
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Created

Created container: cluster-olm-operator
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-r9zcq

Started

Started container cluster-olm-operator

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready"
(x4)

openshift-controller-manager

kubelet

controller-manager-8597858f97-kb2l8

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"
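
Each new revision copies the operator's input configmaps and secrets to revision-suffixed names (kube-scheduler-pod-2, kube-scheduler-pod-3, ...) so installer pods can pin an exact configuration snapshot. A sketch listing the copies for one input:

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cms, err := clientset.CoreV1().ConfigMaps("openshift-kube-scheduler").List(
            context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, cm := range cms.Items {
            // Revision copies are named <input>-<revision>.
            if strings.HasPrefix(cm.Name, "kube-scheduler-pod-") {
                fmt.Println(cm.Name) // e.g. kube-scheduler-pod-2, kube-scheduler-pod-3
            }
        }
    }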

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found"
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-56f6fc54fd-nwfzl

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x5)

openshift-apiserver

kubelet

apiserver-65677d845c-495g9

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-77899cf6d-r9zcq_7ba9f3e6-ad36-4093-8a90-b4a28f339e2d became leader
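
LeaderElection events like this record a controller acquiring its coordination lock so only one replica reconciles at a time. A compact client-go sketch using a Lease lock; the lock coordinates are placeholders (real operators use their own namespace and a fixed name such as cluster-olm-operator-lock), and the timings are commonly used values rather than anything read from this cluster:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        hostname, _ := os.Hostname()
        // Placeholder lock coordinates for this sketch.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "demo-lock", Namespace: "default"},
            Client:     clientset.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    fmt.Println(hostname, "became leader") // reconcile loops start here
                },
                OnStoppedLeading: func() {
                    os.Exit(0) // lost the lease; stop work immediately
                },
            },
        })
    }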

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-85cb8cb9bb to 1 from 0

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-65677d845c to 0 from 1

openshift-apiserver

replicaset-controller

apiserver-85cb8cb9bb

SuccessfulCreate

Created pod: apiserver-85cb8cb9bb-bmx44

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."
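
The generation arithmetic in this message is how workload controllers detect an unprocessed spec change: metadata.generation increments on every spec update, and the deployment controller echoes what it has handled into status.observedGeneration. A sketch printing the same fields the operator compares, using the deployment named in the event:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        d, err := clientset.AppsV1().Deployments("openshift-apiserver").Get(
            context.TODO(), "apiserver", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // The operator reports Progressing while these two disagree,
        // then while updated/available replicas lag the desired count.
        fmt.Printf("generation: desired %d, observed %d\n", d.Generation, d.Status.ObservedGeneration)
        fmt.Printf("replicas: %d updated, %d available of %d\n",
            d.Status.UpdatedReplicas, d.Status.AvailableReplicas, d.Status.Replicas)
    }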

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.openshift-route-controller-manager.serving-cert.secret

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-apiserver

replicaset-controller

apiserver-65677d845c

SuccessfulDelete

Deleted pod: apiserver-65677d845c-495g9

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-8597858f97 to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6b549b45d9 to 1 from 0

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-6b549b45d9

SuccessfulCreate

Created pod: controller-manager-6b549b45d9-fhqdk

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-controller-manager

replicaset-controller

controller-manager-8597858f97

SuccessfulDelete

Deleted pod: controller-manager-8597858f97-kb2l8

openshift-route-controller-manager

replicaset-controller

route-controller-manager-56f6fc54fd

SuccessfulDelete

Deleted pod: route-controller-manager-56f6fc54fd-nwfzl

openshift-oauth-apiserver

replicaset-controller

apiserver-74444d8fbc

SuccessfulCreate

Created pod: apiserver-74444d8fbc-g7z4w

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-5d7d75cbb9 to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-56f6fc54fd to 0 from 1

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-74444d8fbc to 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-5d7d75cbb9

SuccessfulCreate

Created pod: route-controller-manager-5d7d75cbb9-lf8cw

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: caused by changes in data.config.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b"

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-k7dp2

Created

Created container: cluster-image-registry-operator

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-k7dp2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7" in 14.021s (14.021s including waiting). Image size: 548751793 bytes.

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-dcbvq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" in 14.345s (14.345s including waiting). Image size: 517997625 bytes.

openshift-oauth-apiserver

multus

apiserver-74444d8fbc-g7z4w

AddedInterface

Add eth0 [10.128.0.38/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-74444d8fbc-g7z4w

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9"

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-dcbvq

Created

Created container: cluster-version-operator

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-k7dp2

Started

Started container cluster-image-registry-operator

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-dcbvq

Started

Started container cluster-version-operator

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_fee7b9c8-62a1-49bc-bdec-0b23aa27b041 became leader

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70" in 13.802s (13.802s including waiting). Image size: 677929075 bytes.

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

Created

Created container: cluster-node-tuning-operator

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

Started

Started container cluster-node-tuning-operator

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
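
Operators read the resolved gate list from the cluster-scoped FeatureGate resource rather than hard-coding it. A dynamic-client sketch; the status layout assumed here (status.featureGates[] entries with version, enabled, disabled) matches recent OpenShift releases but is an assumption, not taken from this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        gvr := schema.GroupVersionResource{
            Group: "config.openshift.io", Version: "v1", Resource: "featuregates",
        }
        obj, err := client.Resource(gvr).Get(context.TODO(), "cluster", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Assumed layout: status.featureGates[] -> {version, enabled[], disabled[]}.
        details, _, _ := unstructured.NestedSlice(obj.Object, "status", "featureGates")
        for _, d := range details {
            m, ok := d.(map[string]interface{})
            if !ok {
                continue
            }
            enabled, _, _ := unstructured.NestedSlice(m, "enabled")
            fmt.Printf("payload %v: %d gates enabled\n", m["version"], len(enabled))
        }
    }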

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda" in 13.778s (13.778s including waiting). Image size: 468263999 bytes.

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

Created

Created container: dns-operator

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

Started

Started container dns-operator

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-66c7586884-9vjl9_61ab76c0-46d7-463c-bfa1-632700c4945a

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-66c7586884-9vjl9_61ab76c0-46d7-463c-bfa1-632700c4945a became leader

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-scheduler

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.40/23] from ovn-kubernetes

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.37/23] from ovn-kubernetes

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0" in 13.906s (13.906s including waiting). Image size: 511226810 bytes.

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-apiserver

multus

apiserver-85cb8cb9bb-bmx44

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

kubelet

controller-manager-8597858f97-kb2l8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5"

openshift-controller-manager

multus

controller-manager-8597858f97-kb2l8

AddedInterface

Add eth0 [10.128.0.35/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

Started

Started container cluster-node-tuning-operator

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-9vjl9

Created

Created container: cluster-node-tuning-operator

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Started

Started container kube-rbac-proxy

openshift-kube-scheduler

kubelet

installer-2-master-0

Started

Started container installer

openshift-cluster-node-tuning-operator

kubelet

tuned-67jx5

Created

Created container: tuned

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

kubelet

tuned-67jx5

Started

Started container tuned

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-67jx5

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

Started

Started container kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-589895fbb7-gmvnl

Created

Created container: kube-rbac-proxy

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-86d6d77c7c-k7dp2_8e851888-dd7b-4110-8bfa-52f150bc3d9b became leader
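
Note: LeaderElection events such as this one are backed by coordination.k8s.io Lease objects (older controllers used ConfigMap locks instead). A minimal sketch, assuming the Python kubernetes client, that lists the current holder identities in the operator's namespace:

    # List Leases in the namespace to see current leader-election holders.
    from kubernetes import client, config

    config.load_kube_config()
    coordination = client.CoordinationV1Api()

    for lease in coordination.list_namespaced_lease("openshift-image-registry").items:
        print(lease.metadata.name, "->", lease.spec.holder_identity)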

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: caused by changes in data.config.yaml

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-jfjzg

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-dns

kubelet

dns-default-jfjzg

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found
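
Note: this FailedMount is normally transient during installation: the service-ca operator creates "dns-default-metrics-tls" once it processes the serving-cert annotation on the dns-default Service, and the kubelet retries the mount. A minimal sketch to verify both sides (Python kubernetes client assumed; resource names taken from the event above):

    # Check the serving-cert annotation and whether the secret exists yet.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    core = client.CoreV1Api()

    svc = core.read_namespaced_service("dns-default", "openshift-dns")
    print("requested secret:", (svc.metadata.annotations or {}).get(
        "service.beta.openshift.io/serving-cert-secret-name"))

    try:
        core.read_namespaced_secret("dns-default-metrics-tls", "openshift-dns")
        print("secret present; the mount should succeed on the next kubelet retry")
    except ApiException as exc:
        if exc.status == 404:
            print("secret not issued yet")
        else:
            raise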

openshift-route-controller-manager

kubelet

route-controller-manager-5d7d75cbb9-lf8cw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06"

openshift-route-controller-manager

multus

route-controller-manager-5d7d75cbb9-lf8cw

AddedInterface

Add eth0 [10.128.0.41/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Created

Created container: kube-rbac-proxy

openshift-cluster-node-tuning-operator

kubelet

tuned-67jx5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70" already present on machine

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-0

Created

Created container: installer

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

multus

olm-operator-d64cfc9db-8qtmf

AddedInterface

Add eth0 [10.128.0.25/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Created

Created container: kube-rbac-proxy

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-dns

kubelet

node-resolver-l9pkr

Created

Created container: dns-node-resolver

openshift-monitoring

multus

cluster-monitoring-operator-674cbfbd9d-cxs8s

AddedInterface

Add eth0 [10.128.0.21/23] from ovn-kubernetes

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Started

Started container kube-rbac-proxy

openshift-dns

kubelet

node-resolver-l9pkr

Started

Started container dns-node-resolver

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8"

openshift-multus

multus

multus-admission-controller-8d675b596-jgdmb

AddedInterface

Add eth0 [10.128.0.17/23] from ovn-kubernetes

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-marketplace

multus

marketplace-operator-64bf9778cb-mgb5v

AddedInterface

Add eth0 [10.128.0.13/23] from ovn-kubernetes

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-l9pkr

openshift-dns

kubelet

node-resolver-l9pkr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" already present on machine

openshift-operator-lifecycle-manager

multus

package-server-manager-854648ff6d-phgxj

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-multus

multus

network-metrics-daemon-krv7c

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-multus

kubelet

network-metrics-daemon-krv7c

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626"

openshift-dns

multus

dns-default-jfjzg

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-jfjzg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955"

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-79f8cd6fdd to 1

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-8qtmf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8"

openshift-ingress

replicaset-controller

router-default-79f8cd6fdd

SuccessfulCreate

Created pod: router-default-79f8cd6fdd-r6nkv

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

openshift-operator-lifecycle-manager

multus

catalog-operator-7d9c49f57b-8jr6f

AddedInterface

Add eth0 [10.128.0.22/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-8jr6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8"

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"
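
Note: the RetrievePayload -> LoadPayload -> PayloadLoaded sequence is the cluster-version operator verifying the release image and loading its manifests. A minimal sketch (Python kubernetes client assumed) that reads the same version/image back from the ClusterVersion resource:

    # Read the desired release from the ClusterVersion object named "version".
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    cv = api.get_cluster_custom_object("config.openshift.io", "v1", "clusterversions", "version")
    desired = cv.get("status", {}).get("desired", {})
    print("version:", desired.get("version"), "image:", desired.get("image"))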

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 (x63)
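
Note: the installer controller will not lay down a static-pod revision until every revisioned input listed above exists; the surrounding ConfigMapCreated/SecretCreated events show the revision controller creating them. A minimal sketch (Python kubernetes client assumed; names copied from the event message) that re-checks the configmap half of the list:

    # Re-check which revision-0 configmaps are still missing.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()
    ns = "openshift-kube-controller-manager"

    present = {cm.metadata.name for cm in core.list_namespaced_config_map(ns).items}
    wanted = {"cluster-policy-controller-config-0", "config-0",
              "controller-manager-kubeconfig-0", "kube-controller-cert-syncer-kubeconfig-0",
              "kube-controller-manager-pod-0", "recycler-config-0",
              "service-ca-0", "serviceaccount-ca-0"}
    print("missing configmaps:", sorted(wanted - present))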

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-6598bfb6c4 to 1

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-6598bfb6c4

SuccessfulCreate

Created pod: operator-controller-controller-manager-6598bfb6c4-7nhvs

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 (x48)

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-7f8b8b6f4c to 1

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment")

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""
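
Note: every CatalogdClusterCatalog*Degraded message in this run has the same proximate cause: the mutating webhook's Service has no ready endpoints until the catalogd-controller-manager pod (created below) passes its readiness checks. A minimal sketch (Python kubernetes client assumed) that checks the same endpoints the apiserver dials:

    # Check whether catalogd-service has any ready endpoint addresses.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    ep = core.read_namespaced_endpoints("catalogd-service", "openshift-catalogd")
    ready = [addr.ip for subset in (ep.subsets or []) for addr in (subset.addresses or [])]
    print("ready endpoints:", ready or "none - webhook calls will keep failing")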

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-catalogd

replicaset-controller

catalogd-controller-manager-7f8b8b6f4c

SuccessfulCreate

Created pod: catalogd-controller-manager-7f8b8b6f4c-w2q2q

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"
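
Note: across these OperatorStatusChanged events the authentication operator's Degraded message shrinks one condition at a time as prerequisites appear (router certs, service CA, oauth endpoints). Rather than diffing event strings, the current state can be read from the clusteroperator's conditions; a minimal sketch (Python kubernetes client assumed):

    # Print the authentication clusteroperator's current conditions.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    co = api.get_cluster_custom_object("config.openshift.io", "v1",
                                       "clusteroperators", "authentication")
    for cond in co.get("status", {}).get("conditions", []):
        print(cond["type"], cond["status"], "-", (cond.get("message") or "")[:120])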

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-catalogd

replicaset-controller

catalogd-controller-manager-7f8b8b6f4c

SuccessfulCreate

Created pod: catalogd-controller-manager-7f8b8b6f4c-w2q2q

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found
(x5)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap
(x2)

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

(combined from similar events): Scaled up replica set controller-manager-5b4bdf67b6 to 1 from 0

openshift-controller-manager

replicaset-controller

controller-manager-6b549b45d9

SuccessfulDelete

Deleted pod: controller-manager-6b549b45d9-fhqdk

openshift-route-controller-manager

replicaset-controller

route-controller-manager-544c885f6d

SuccessfulCreate

Created pod: route-controller-manager-544c885f6d-dr4gh

openshift-controller-manager

replicaset-controller

controller-manager-5b4bdf67b6

SuccessfulCreate

Created pod: controller-manager-5b4bdf67b6-8rdjs

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 5 triggered by "required configmap/serviceaccount-ca has changed"

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-544c885f6d to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-5d7d75cbb9 to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-5d7d75cbb9

SuccessfulDelete

Deleted pod: route-controller-manager-5d7d75cbb9-lf8cw

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-8qtmf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" in 13.459s (13.459s including waiting). Image size: 862633255 bytes.

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914" in 13.196s (13.196s including waiting). Image size: 458126424 bytes.

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b" in 14.917s (14.917s including waiting). Image size: 589379637 bytes.

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9" in 13.136s (13.136s including waiting). Image size: 456575686 bytes.

openshift-multus

kubelet

network-metrics-daemon-krv7c

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626" in 13.217s (13.217s including waiting). Image size: 448828105 bytes.

openshift-dns

kubelet

dns-default-jfjzg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955" in 12.859s (12.859s including waiting). Image size: 484175664 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-5d7d75cbb9-lf8cw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" in 14.266s (14.266s including waiting). Image size: 487090672 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-5d7d75cbb9-lf8cw

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-5d7d75cbb9-lf8cw

Started

Started container route-controller-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" in 13.36s (13.36s including waiting). Image size: 862633255 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e" in 13.327s (13.327s including waiting). Image size: 484450382 bytes.

openshift-controller-manager

kubelet

controller-manager-8597858f97-kb2l8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5" in 15.107s (15.107s including waiting). Image size: 558210153 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-745944c6b7 to 0 from 1

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-8jr6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" in 13.465s (13.465s including waiting). Image size: 862633255 bytes.

openshift-cluster-version

replicaset-controller

cluster-version-operator-745944c6b7

SuccessfulDelete

Deleted pod: cluster-version-operator-745944c6b7-dcbvq

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-dcbvq

Killing

Stopping container cluster-version-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

kubelet

apiserver-74444d8fbc-g7z4w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9" in 14.764s (14.764s including waiting). Image size: 505344964 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-multus

kubelet

network-metrics-daemon-krv7c

Created

Created container: network-metrics-daemon

openshift-operator-controller

multus

operator-controller-controller-manager-6598bfb6c4-7nhvs

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-8597858f97-kb2l8

Created

Created container: controller-manager

openshift-operator-controller

operator-controller-controller-manager-6598bfb6c4-7nhvs_209a775e-7fc3-4b15-9af5-c2bfbb0ea421

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-6598bfb6c4-7nhvs_209a775e-7fc3-4b15-9af5-c2bfbb0ea421 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.9:44226->172.30.0.10:53: read: connection refused" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.9:37512->172.30.0.10:53: read: connection refused" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.9:44226->172.30.0.10:53: read: connection refused"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.9:37512->172.30.0.10:53: read: connection refused"
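
The three OperatorStatusChanged events above record the GarbageCollectorDegraded condition flapping while in-cluster DNS (172.30.0.10) is still settling; the status controller is diffing conditions on clusteroperator/kube-controller-manager. A sketch that reads those conditions directly, assuming the `kubernetes` Python client; ClusterOperator is an OpenShift-specific API, so the dynamic CustomObjectsApi is used rather than a typed client:

```python
# Sketch: dump the conditions these OperatorStatusChanged events are diffing.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

co = api.get_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="clusteroperators", name="kube-controller-manager",
)
for cond in co.get("status", {}).get("conditions", []):
    print(cond["type"], cond["status"], "-", cond.get("message", ""))
```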

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-8597858f97-kb2l8 became leader

openshift-oauth-apiserver

kubelet

apiserver-74444d8fbc-g7z4w

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-74444d8fbc-g7z4w

Created

Created container: fix-audit-permissions

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Created

Created container: kube-rbac-proxy

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Started

Started container marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Created

Created container: marketplace-operator

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-catalogd

multus

catalogd-controller-manager-7f8b8b6f4c-w2q2q

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-jfjzg

Created

Created container: dns

openshift-dns

kubelet

dns-default-jfjzg

Started

Started container dns

openshift-dns

kubelet

dns-default-jfjzg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

Created

Created container: cluster-monitoring-operator

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-cxs8s

Started

Started container cluster-monitoring-operator

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
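
FeatureGatesInitialized is the operator logging the enabled/disabled sets it resolved from the cluster-scoped FeatureGate object, so the same data can be read back from featuregate/cluster. A sketch, assuming the `kubernetes` Python client; the status.featureGates layout shown matches config.openshift.io/v1 in this release and may differ in others:

```python
# Sketch: read the FeatureGate object behind the FeatureGatesInitialized event.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

fg = api.get_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="featuregates", name="cluster",
)
for details in fg.get("status", {}).get("featureGates", []):
    enabled = [g["name"] for g in details.get("enabled", [])]
    disabled = [g["name"] for g in details.get("disabled", [])]
    print(details.get("version"), len(enabled), "enabled /", len(disabled), "disabled")
```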

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-hql6g" is created for OpenShiftMonitoringClientCertRequester

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-dj92h" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-multus

kubelet

network-metrics-daemon-krv7c

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-multus

kubelet

network-metrics-daemon-krv7c

Started

Started container network-metrics-daemon

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-8464df8497

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-8464df8497-st8tx

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Created

Created container: fix-audit-permissions

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-8464df8497 to 1

openshift-kube-scheduler

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-kube-scheduler

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-8jr6f

Created

Created container: catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-8jr6f

Started

Started container catalog-operator

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-dj92h" has been approved

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-hql6g" has been approved
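
Together with the CSRCreated and ClientCertificateCreated events above, these CSRApproval events trace the standard client-certificate issuance loop: a requester posts a CertificateSigningRequest, an approver (here the bootstrap kube-controller-manager's csr-approver controller) sets the Approved condition, and the signer then fills in status.certificate. A sketch of the approver's side, assuming the `kubernetes` Python client; approving a real CSR grants credentials, so this is illustrative only:

```python
# Sketch: read a CSR named in the events above and approve it the way an
# approver controller would (illustrative; do not run this against a live
# cluster unless you mean it).
from kubernetes import client, config

config.load_kube_config()
certs = client.CertificatesV1Api()

name = "system:openshift:openshift-monitoring-hql6g"  # from the event above
csr = certs.read_certificate_signing_request(name)
print(csr.spec.signer_name, csr.spec.username)

csr.status.conditions = [
    client.V1CertificateSigningRequestCondition(
        type="Approved", status="True",
        reason="ManualApproval", message="approved for illustration",
    )
]
certs.replace_certificate_signing_request_approval(name, csr)
```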

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Started

Started container package-server-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Created

Created container: package-server-manager

openshift-controller-manager

kubelet

controller-manager-8597858f97-kb2l8

Started

Started container controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-5d7d75cbb9-lf8cw

Unhealthy

Readiness probe failed: Get "https://10.128.0.41:8443/healthz": read tcp 10.128.0.2:59716->10.128.0.41:8443: read: connection reset by peer

openshift-route-controller-manager

kubelet

route-controller-manager-5d7d75cbb9-lf8cw

Killing

Stopping container route-controller-manager

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_fee7b9c8-62a1-49bc-bdec-0b23aa27b041 stopped leading

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-8qtmf

Started

Started container olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-8qtmf

Created

Created container: olm-operator

openshift-route-controller-manager

kubelet

route-controller-manager-5d7d75cbb9-lf8cw

ProbeError

Readiness probe error: Get "https://10.128.0.41:8443/healthz": read tcp 10.128.0.2:59716->10.128.0.41:8443: read: connection reset by peer body:

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Created

Created container: openshift-apiserver-check-endpoints

openshift-controller-manager

kubelet

controller-manager-8597858f97-kb2l8

Killing

Stopping container controller-manager

openshift-kube-scheduler

kubelet

installer-3-master-0

Started

Started container installer

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-krv7c

Started

Started container kube-rbac-proxy

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-8c9c967c7 to 1

openshift-kube-scheduler

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-multus

kubelet

network-metrics-daemon-krv7c

Created

Created container: kube-rbac-proxy

openshift-cluster-version

replicaset-controller

cluster-version-operator-8c9c967c7

SuccessfulCreate

Created pod: cluster-version-operator-8c9c967c7-vm7rj

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

package-server-manager-854648ff6d-phgxj_74ff0e5f-8dc2-4c64-aecf-54ed85985b35

packageserver-controller-lock

LeaderElection

package-server-manager-854648ff6d-phgxj_74ff0e5f-8dc2-4c64-aecf-54ed85985b35 became leader
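
LeaderElection events like this one record lock acquisitions; the identity is the pod name plus a per-process UUID, and the lock object named in the RelatedObject column (packageserver-controller-lock) is a coordination.k8s.io Lease or, for older controllers, an annotated ConfigMap. A sketch that reads the current holder either way, assuming the `kubernetes` Python client; the lock name and namespace are taken from this event:

```python
# Sketch: find the current holder of the leader-election lock from this event.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
ns, name = "openshift-operator-lifecycle-manager", "packageserver-controller-lock"

try:
    # Newer controllers use a coordination.k8s.io Lease as the resource lock.
    lease = client.CoordinationV1Api().read_namespaced_lease(name, ns)
    print("lease holder:", lease.spec.holder_identity)
except ApiException:
    # Older controllers record the holder in a ConfigMap annotation instead.
    cm = client.CoreV1Api().read_namespaced_config_map(name, ns)
    print(cm.metadata.annotations.get("control-plane.alpha.kubernetes.io/leader"))
```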

openshift-dns

kubelet

dns-default-jfjzg

Started

Started container kube-rbac-proxy

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b" already present on machine

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-scheduler

kubelet

installer-3-master-0

Created

Created container: installer

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Started

Started container openshift-apiserver-check-endpoints

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Created

Created container: kube-rbac-proxy

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-dns

kubelet

dns-default-jfjzg

Created

Created container: kube-rbac-proxy

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager: cause by changes in data.pod.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing

openshift-catalogd

catalogd-controller-manager-7f8b8b6f4c-w2q2q_611baec8-0676-407c-bd55-83350e6079ba

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7f8b8b6f4c-w2q2q_611baec8-0676-407c-bd55-83350e6079ba became leader

openshift-oauth-apiserver

kubelet

apiserver-74444d8fbc-g7z4w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-74444d8fbc-g7z4w

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-74444d8fbc-g7z4w

Started

Started container oauth-apiserver

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Started

Started container kube-rbac-proxy

openshift-route-controller-manager

kubelet

route-controller-manager-544c885f6d-dr4gh

Created

Created container: route-controller-manager

openshift-marketplace

kubelet

certified-operators-lqc4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-kube-scheduler

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

community-operators-ms5vp

Started

Started container extract-utilities

openshift-marketplace

kubelet

community-operators-ms5vp

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-ms5vp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

multus

community-operators-ms5vp

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-vm7rj

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" already present on machine

openshift-kube-scheduler

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-lqc4n

Started

Started container extract-utilities

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-vm7rj

Created

Created container: cluster-version-operator

openshift-route-controller-manager

kubelet

route-controller-manager-544c885f6d-dr4gh

Started

Started container route-controller-manager

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-vm7rj

Started

Started container cluster-version-operator

openshift-marketplace

kubelet

certified-operators-lqc4n

Created

Created container: extract-utilities

openshift-controller-manager

multus

controller-manager-5b4bdf67b6-8rdjs

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-marketplace

multus

certified-operators-lqc4n

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-544c885f6d-dr4gh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-544c885f6d-dr4gh

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_9e94b5cf-18dc-4f43-b6e3-44e476f54660 became leader

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-544c885f6d-dr4gh_d2adc62d-c771-4924-96e4-4cd7a3bbfd2e became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5b4bdf67b6-8rdjs became leader
(x10)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NoOperatorGroup

csv in namespace with no operatorgroups

openshift-marketplace

kubelet

certified-operators-lqc4n

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

community-operators-ms5vp

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

multus

redhat-marketplace-4r9ht

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-4-master-0

Started

Started container installer

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

multus

redhat-operators-mr22p

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 5 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-marketplace

kubelet

redhat-operators-mr22p

Created

Created container: extract-utilities

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed"

openshift-marketplace

kubelet

redhat-operators-mr22p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

redhat-operators-mr22p

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-mr22p

Started

Started container extract-utilities

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-etcd

kubelet

etcd-master-0-master-0

Killing

Stopping container etcdctl

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-controller-manager

kubelet

installer-1-master-0

Killing

Stopping container installer
(x3)

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500
body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/image.openshift.io-apiserver-caches ok
[-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
[+]poststarthook/project.openshift.io-projectcache ok
[+]poststarthook/project.openshift.io-projectauthorizationcache ok
[+]poststarthook/openshift.io-startinformers ok
[+]poststarthook/openshift.io-restmapperupdater ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
livez check failed

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"
(x26)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-etcd

static-pod-installer

installer-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 1

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.user.openshift.io because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 5"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.oauth.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.34"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-4-master-0

Killing

Stopping container installer

openshift-marketplace

kubelet

community-operators-ms5vp

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-mr22p

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-mr22p

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-lqc4n

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-lqc4n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

community-operators-ms5vp

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-ms5vp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

certified-operators-lqc4n

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-mr22p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 3.049s (3.049s including waiting). Image size: 918278686 bytes.
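For scale: 918278686 bytes (~0.92 GB) in 3.049 s works out to roughly 918278686 / 3.049 ≈ 301 MB/s (~287 MiB/s), and the near-identical pull times reported for the other catalog pods below suggest the image layers were served from a warm, nearby source.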

openshift-marketplace

kubelet

community-operators-ms5vp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 3.038s (3.039s including waiting). Image size: 918278686 bytes.

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-marketplace

kubelet

certified-operators-lqc4n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 3.048s (3.048s including waiting). Image size: 918278686 bytes.

openshift-marketplace

kubelet

certified-operators-lqc4n

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Started

Started container registry-server

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-marketplace

kubelet

community-operators-ms5vp

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-ms5vp

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-lqc4n

Started

Started container registry-server

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-mr22p

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-mr22p

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-mr22p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 2.184s (2.184s including waiting). Image size: 918278686 bytes.

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Liveness probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused

openshift-marketplace

kubelet

certified-operators-lqc4n

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
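This text is the output of a gRPC health probe: catalog registry pods serve their index over gRPC on port 50051, and the startup probe fails until the registry-server container has finished loading. The same message repeats for the other catalog pods below. A minimal sketch of the equivalent check with grpc-go, assuming a local :50051 endpoint:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		fmt.Println("probe would fail here:", err) // matches the Unhealthy events above
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe would fail here:", err)
		return
	}
	fmt.Println("serving status:", resp.GetStatus())
}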

openshift-marketplace

kubelet

community-operators-ms5vp

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine
(x3)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Unhealthy

Liveness probe failed: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Killing

Container openshift-config-operator failed liveness probe, will be restarted
(x3)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

ProbeError

Liveness probe error: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused body:

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-marketplace

kubelet

redhat-operators-mr22p

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine
(x5)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

Unhealthy

Readiness probe failed: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars
(x6)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-vnl28

ProbeError

Readiness probe error: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused body:
(x7)

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x5)

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted

openshift-kube-scheduler

kubelet

installer-5-master-0

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-5-master-0_openshift-kube-scheduler_21dd42b1-2628-4a24-97e7-6759888ed316_0(1b24d26e9924406ab705c5b22ab8aabe5652dc45b1686bf53f21c2d4d1ba3adf): error adding pod openshift-kube-scheduler_installer-5-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1b24d26e9924406ab705c5b22ab8aabe5652dc45b1686bf53f21c2d4d1ba3adf" Netns:"/var/run/netns/e8993663-ed4b-4910-bff9-50187d15a2a8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-5-master-0;K8S_POD_INFRA_CONTAINER_ID=1b24d26e9924406ab705c5b22ab8aabe5652dc45b1686bf53f21c2d4d1ba3adf;K8S_POD_UID=21dd42b1-2628-4a24-97e7-6759888ed316" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-5-master-0] networking: Multus: [openshift-kube-scheduler/installer-5-master-0/21dd42b1-2628-4a24-97e7-6759888ed316]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-5-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-5-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0?timeout=1m0s": context deadline exceeded ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
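The failure chain here bottoms out in the Multus shim timing out against https://api-int.sno.openstack.lab:6443 while writing the pod's network-status annotation; the sandbox creation is retried with the same configuration in a later event. The StdinData blob at the end of the message is the Multus CNI configuration itself; a short sketch decoding the fields it carries (the struct is ad hoc, and json.Unmarshal simply ignores the keys it omits):

package main

import (
	"encoding/json"
	"fmt"
)

type multusConf struct {
	BinDir          string `json:"binDir"`
	ClusterNetwork  string `json:"clusterNetwork"`
	CNIVersion      string `json:"cniVersion"`
	DaemonSocketDir string `json:"daemonSocketDir"`
	Name            string `json:"name"`
	Type            string `json:"type"`
}

func main() {
	// Copied verbatim from the StdinData in the event above.
	raw := `{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}`
	var c multusConf
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// The delegate config lives on the host; the shim only proxies to the daemon socket.
	fmt.Printf("%s (%s) -> delegate %s via %s\n", c.Name, c.Type, c.ClusterNetwork, c.DaemonSocketDir)
}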

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Unhealthy

Readiness probe failed: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

ProbeError

Readiness probe error: Get "http://10.128.0.45:8081/readyz": dial tcp 10.128.0.45:8081: connect: connection refused body:

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Unhealthy

Readiness probe failed: Get "http://10.128.0.45:8081/readyz": dial tcp 10.128.0.45:8081: connect: connection refused

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

ProbeError

Readiness probe error: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused body:
(x4)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Unhealthy

Readiness probe failed: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused
(x3)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

ProbeError

Liveness probe error: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused body:

openshift-kube-scheduler

kubelet

installer-5-master-0

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-5-master-0_openshift-kube-scheduler_21dd42b1-2628-4a24-97e7-6759888ed316_0(a17c4d8c7eb07aa5bdf2596382750aacc385edeceaae39266656d3bbbb603224): error adding pod openshift-kube-scheduler_installer-5-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a17c4d8c7eb07aa5bdf2596382750aacc385edeceaae39266656d3bbbb603224" Netns:"/var/run/netns/d5c17055-bd7a-44c3-91f8-32894633452e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-5-master-0;K8S_POD_INFRA_CONTAINER_ID=a17c4d8c7eb07aa5bdf2596382750aacc385edeceaae39266656d3bbbb603224;K8S_POD_UID=21dd42b1-2628-4a24-97e7-6759888ed316" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-5-master-0] networking: Multus: [openshift-kube-scheduler/installer-5-master-0/21dd42b1-2628-4a24-97e7-6759888ed316]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-5-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-5-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x3)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Unhealthy

Liveness probe failed: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused
(x4)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

ProbeError

Readiness probe error: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused body:
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-m7549

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-st7mk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9" already present on machine

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914" already present on machine
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine
(x3)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-49hzm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b" already present on machine

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-pfdrx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-7gtw2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-p8hlq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" already present on machine

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-rj9cl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab" already present on machine
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc" already present on machine
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-bh886

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine
(x3)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-49hzm

Started

Started container openshift-controller-manager-operator
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Started

Started container manager
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-p8hlq

Started

Started container service-ca-operator

openshift-network-node-identity

kubelet

network-node-identity-m7549

Created

Created container: approver

openshift-network-node-identity

kubelet

network-node-identity-m7549

Started

Started container approver
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-bh886

Created

Created container: kube-scheduler-operator-container
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-bh886

Started

Started container kube-scheduler-operator-container
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Created

Created container: manager
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-p8hlq

Created

Created container: service-ca-operator
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Started

Started container manager
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Created

Created container: manager

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-7gtw2

Started

Started container kube-apiserver-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-7gtw2

Created

Created container: kube-apiserver-operator
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-st7mk

Started

Started container kube-storage-version-migrator-operator
(x3)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-49hzm

Created

Created container: openshift-controller-manager-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-pfdrx

Created

Created container: kube-controller-manager-operator
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-st7mk

Created

Created container: kube-storage-version-migrator-operator
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Created

Created container: manager
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-pfdrx

Started

Started container kube-controller-manager-operator
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-rj9cl

Created

Created container: openshift-apiserver-operator
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-rj9cl

Started

Started container openshift-apiserver-operator

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from False to True ("CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"")
(x2)
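OperatorStatusChanged events quote the full condition message before and after each transition, which is why these rows run so long. A sketch of reading the same clusteroperator/olm conditions directly via the dynamic client, assuming an admin kubeconfig (the GroupVersionResource is the real config.openshift.io one):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators"}
	co, err := dyn.Resource(gvr).Get(context.TODO(), "olm", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	conds, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
	for _, c := range conds {
		if m, ok := c.(map[string]interface{}); ok {
			fmt.Printf("%v=%v (%v)\n", m["type"], m["status"], m["reason"])
		}
	}
}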

openshift-network-operator

kubelet

network-operator-7c649bf6d4-st2sr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

openshift-network-operator

kubelet

network-operator-7c649bf6d4-st2sr

Started

Started container network-operator
(x2)

openshift-network-operator

kubelet

network-operator-7c649bf6d4-st2sr

Created

Created container: network-operator
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x3)

openshift-kube-scheduler

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"
(x3)

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

Unhealthy

Liveness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused
(x3)

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

ProbeError

Liveness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body:

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_f325158b-2b9d-46ad-b1e7-6862a8f052bb became leader

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
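The message body is Go's %#v rendering of the operator's feature gate struct. A minimal reproduction with ad hoc stand-in types (the real types live in the OpenShift API and operator libraries), abbreviated to two gates per list:

package main

import "fmt"

// Ad hoc stand-ins for the operator's real types.
type FeatureGateName string

type Features struct {
	Enabled  []FeatureGateName
	Disabled []FeatureGateName
}

func main() {
	f := Features{
		Enabled:  []FeatureGateName{"AdminNetworkPolicy", "NewOLM"}, // abbreviated from the event
		Disabled: []FeatureGateName{"GatewayAPI", "UpgradeStatus"},
	}
	// %#v produces the same kind of Go-syntax dump seen in the event message.
	fmt.Printf("FeatureGates updated to %#v\n", f)
}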

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from True to False ("CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)")
(x2)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Unhealthy

Liveness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused
(x2)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

ProbeError

Liveness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body:

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_7fee3a9a-e652-4eb8-a3e5-abaebff12e30 became leader
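LeaderElection events like this one are emitted when a controller acquires its coordination.k8s.io Lease. A sketch that reads the current holder, assuming the conventional kube-controller-manager Lease name in kube-system:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	lease, err := cs.CoordinationV1().Leases("kube-system").
		Get(context.TODO(), "kube-controller-manager", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity) // e.g. master-0_7fee3a9a-... above
	}
}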

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-8f89dfddd to 1

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found
(x6)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

ProbeError

Liveness probe error: Get "https://10.128.0.7:8443/healthz": dial tcp 10.128.0.7:8443: connect: connection refused body:
(x6)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Unhealthy

Liveness probe failed: Get "https://10.128.0.7:8443/healthz": dial tcp 10.128.0.7:8443: connect: connection refused
(x5)

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

ProbeError

Readiness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body:
(x5)

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

Unhealthy

Readiness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused
(x2)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953" already present on machine

openshift-insights

replicaset-controller

insights-operator-8f89dfddd

SuccessfulCreate

Created pod: insights-operator-8f89dfddd-brq9l

openshift-kube-scheduler

kubelet

installer-5-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine
(x3)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Started

Started container etcd-operator

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Killing

Container authentication-operator failed liveness probe, will be restarted

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-66b55d57d-m77x2 became leader
(x3)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Created

Created container: etcd-operator
(x3)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Created

Created container: ovnkube-cluster-manager
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5" already present on machine
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Started

Started container snapshot-controller

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Started

Started container ovnkube-cluster-manager
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Created

Created container: snapshot-controller
(x3)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Created

Created container: authentication-operator

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5b4bdf67b6-8rdjs became leader
(x3)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-dkqc4

Started

Started container authentication-operator

openshift-kube-scheduler

kubelet

installer-5-master-0

Started

Started container installer
(x2)

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

Started

Started container controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

Created

Created container: controller-manager

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821"

openshift-insights

multus

insights-operator-8f89dfddd-brq9l

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-ms5vp

Killing

Stopping container registry-server

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-viewer-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-69576476f7 to 1

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-viewer-role)\nOperatorControllerStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-metrics-reader)\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-proxy-role)\nCatalogdStaticResourcesDegraded: \"catalogd/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io catalogd-leader-election-rolebinding)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-viewer-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-machine-approver

replicaset-controller

machine-approver-955fcfb87

SuccessfulCreate

Created pod: machine-approver-955fcfb87-rh4g5

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-955fcfb87 to 1

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-6686554ddc to 1

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-6686554ddc

SuccessfulCreate

Created pod: control-plane-machine-set-operator-6686554ddc-8krst

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821" in 2.936s (2.936s including waiting). Image size: 504658657 bytes.

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-559568b945 to 1

openshift-marketplace

kubelet

community-operators-6t5lg

Started

Started container extract-utilities

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-marketplace

kubelet

community-operators-6t5lg

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-6t5lg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

multus

community-operators-6t5lg

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-69576476f7

SuccessfulCreate

Created pod: cluster-autoscaler-operator-69576476f7-dpg4q

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce"

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-55d85b7b47 to 1

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-55d85b7b47

SuccessfulCreate

Created pod: cloud-credential-operator-55d85b7b47-nrb7q

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-559568b945

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-559568b945-8lgqf

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-664cb58b85 to 1

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-664cb58b85

SuccessfulCreate

Created pod: cluster-samples-operator-664cb58b85-8lf4q

openshift-marketplace

kubelet

redhat-operators-mr22p

Killing

Stopping container registry-server

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-5cdb4c5598 to 1

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Started

Started container kube-rbac-proxy

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-9c44c86f9

SuccessfulCreate

Created pod: packageserver-9c44c86f9-rplwv
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

InstallModes now support target namespaces

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-9c44c86f9 to 1

openshift-insights

openshift-insights-operator

insights-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
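
This FeatureGatesInitialized message recurs below under several components (machine-config-operator, cloud-controller-manager, cluster-storage-operator); each operator appears to emit it once its feature-gate accessor first syncs, and the Enabled/Disabled sets are identical in every occurrence. The authoritative lists live on the cluster-scoped FeatureGate resource; a minimal sketch of reading them, assuming the kubernetes Python client and the config.openshift.io/v1 FeatureGate status layout (per-version featureGates entries carrying enabled/disabled name lists):

    from kubernetes import config, dynamic
    from kubernetes.client import api_client

    config.load_kube_config()
    dyn = dynamic.DynamicClient(api_client.ApiClient())

    feature_gates = dyn.resources.get(
        api_version="config.openshift.io/v1", kind="FeatureGate")
    fg = feature_gates.get(name="cluster")
    for per_version in fg.status.featureGates:
        enabled = [g.name for g in (per_version.enabled or [])]
        disabled = [g.name for g in (per_version.disabled or [])]
        print(per_version.version, len(enabled), "enabled,", len(disabled), "disabled")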

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1"

openshift-machine-api

multus

control-plane-machine-set-operator-6686554ddc-8krst

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-5cdb4c5598

SuccessfulCreate

Created pod: cluster-baremetal-operator-5cdb4c5598-qldx6

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-6fbfc8dc8f to 1

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf"

openshift-cluster-samples-operator

multus

cluster-samples-operator-664cb58b85-8lf4q

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-s8lfn" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
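
The four FailedMount warnings above are transient: they are raised while the kubelet's configmap and secret caches are still syncing shortly after the node becomes ready, and the mounts succeed on retry (the machine-approver containers are created and started further down). To pull only the Warning events for this pod, a minimal sketch assuming the kubernetes Python client, with the namespace and pod name taken from the events above:

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Field selectors filter server-side; "type=Warning" excludes Normal events.
    events = core.list_namespaced_event(
        namespace="openshift-cluster-machine-approver",
        field_selector="involvedObject.name=machine-approver-955fcfb87-rh4g5"
                       ",type=Warning")
    for ev in events.items:
        print(ev.reason, ev.message)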

openshift-machine-api

multus

cluster-baremetal-operator-5cdb4c5598-qldx6

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes

openshift-cloud-credential-operator

multus

cloud-credential-operator-55d85b7b47-nrb7q

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-6t5lg

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-nrb7q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-6fbfc8dc8f

SuccessfulCreate

Created pod: cluster-storage-operator-6fbfc8dc8f-sdsks

openshift-machine-api

multus

cluster-autoscaler-operator-69576476f7-dpg4q

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3"

openshift-marketplace

multus

redhat-operators-9j9zs

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-lqc4n

Killing

Stopping container registry-server

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-metrics-reader)\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-proxy-role)\nCatalogdStaticResourcesDegraded: \"catalogd/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io catalogd-leader-election-rolebinding)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-viewer-role)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-viewer-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-sdsks

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5"

openshift-marketplace

kubelet

redhat-operators-9j9zs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-nrb7q

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

replicaset-controller

machine-config-operator-fdb5c78b5

SuccessfulCreate

Created pod: machine-config-operator-fdb5c78b5-5nbfk

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d"

openshift-marketplace

kubelet

redhat-operators-9j9zs

Created

Created container: extract-utilities

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-fdb5c78b5 to 1

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-nrb7q

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-marketplace

kubelet

redhat-operators-9j9zs

Started

Started container extract-utilities

openshift-machine-api

replicaset-controller

machine-api-operator-84bf6db4f9

SuccessfulCreate

Created pod: machine-api-operator-84bf6db4f9-bncfj

openshift-operator-lifecycle-manager

multus

packageserver-9c44c86f9-rplwv

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-6t5lg

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.09s (1.09s including waiting). Image size: 1220167376 bytes.

openshift-marketplace

kubelet

community-operators-6t5lg

Created

Created container: extract-content

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-84bf6db4f9 to 1

openshift-cluster-storage-operator

multus

cluster-storage-operator-6fbfc8dc8f-sdsks

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-6t5lg

Started

Started container extract-content

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-nrb7q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8"

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d"

openshift-marketplace

kubelet

community-operators-6t5lg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

redhat-operators-9j9zs

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Created

Created container: kube-rbac-proxy

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-viewer-role)\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-marketplace

multus

certified-operators-9nqqp

AddedInterface

Add eth0 [10.128.0.66/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-4r9ht

Killing

Stopping container registry-server

openshift-marketplace

kubelet

certified-operators-9nqqp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-operator-lifecycle-manager

kubelet

packageserver-9c44c86f9-rplwv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-7577d6f48-vd52m

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-7577d6f48-vd52m became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" in 6.028s (6.028s including waiting). Image size: 557426734 bytes.

openshift-machine-api

multus

machine-api-operator-84bf6db4f9-bncfj

AddedInterface

Add eth0 [10.128.0.68/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3" in 5.208s (5.208s including waiting). Image size: 456374430 bytes.

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1" in 5.717s (5.717s including waiting). Image size: 470680779 bytes.

openshift-machine-config-operator

multus

machine-config-operator-fdb5c78b5-5nbfk

AddedInterface

Add eth0 [10.128.0.67/23] from ovn-kubernetes

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf" in 5.699s (5.699s including waiting). Image size: 455416776 bytes.

openshift-marketplace

kubelet

certified-operators-9nqqp

Created

Created container: extract-utilities

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-marketplace

kubelet

certified-operators-9nqqp

Started

Started container extract-utilities

openshift-operator-lifecycle-manager

kubelet

packageserver-9c44c86f9-rplwv

Started

Started container packageserver

openshift-operator-lifecycle-manager

kubelet

packageserver-9c44c86f9-rplwv

Created

Created container: packageserver

openshift-marketplace

kubelet

community-operators-6t5lg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 4.157s (4.157s including waiting). Image size: 918278686 bytes.

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-marketplace

multus

redhat-marketplace-4fjw9

AddedInterface

Add eth0 [10.128.0.69/23] from ovn-kubernetes

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821" already present on machine

openshift-marketplace

kubelet

redhat-operators-9j9zs

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 6.868s (6.868s including waiting). Image size: 1733328350 bytes.

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-sdsks

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5" in 7.792s (7.792s including waiting). Image size: 513581866 bytes.

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf" already present on machine

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7"

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

Created

Created container: kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-nrb7q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8" in 8.417s (8.417s including waiting). Image size: 880378279 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d" in 8.316s (8.316s including waiting). Image size: 470822665 bytes.

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to the same featuregates.Features Enabled/Disabled sets listed in the openshift-insights FeatureGatesInitialized event above.

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d" in 7.465s (7.465s including waiting). Image size: 467234714 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

Created

Created container: cluster-samples-operator

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Created

Created container: machine-config-operator

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Started

Started container machine-config-operator

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

Started

Started container cluster-samples-operator

openshift-machine-api

cluster-autoscaler-operator-69576476f7-dpg4q_3f4ab0e9-89cb-4f24-ba1f-d53b4a16e6e9

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-69576476f7-dpg4q_3f4ab0e9-89cb-4f24-ba1f-d53b4a16e6e9 became leader

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Created

Created container: extract-utilities

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.34} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c}]

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Created

Created container: cluster-autoscaler-operator

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing
(x2)

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

Started

Started container insights-operator
(x2)

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

Created

Created container: insights-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Started

Started container cluster-autoscaler-operator

openshift-marketplace

kubelet

redhat-operators-9j9zs

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-9j9zs

Started

Started container extract-content

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to the same featuregates.Features Enabled/Disabled sets listed in the openshift-insights FeatureGatesInitialized event above.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-cloud-controller-manager-operator

master-0_1c148145-23fd-432b-98f2-0e95192cd2ea

cluster-cloud-config-sync-leader

LeaderElection

master-0_1c148145-23fd-432b-98f2-0e95192cd2ea became leader

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Created

Created container: machine-approver-controller

openshift-cloud-controller-manager-operator

master-0_1db49ee6-8766-4495-bfe8-565c1ddab778

cluster-cloud-controller-manager-leader

LeaderElection

master-0_1db49ee6-8766-4495-bfe8-565c1ddab778 became leader

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Started

Started container machine-approver-controller

openshift-cluster-machine-approver

master-0_0c068fb9-83ea-4cbd-b75c-6a3c36d19808

cluster-machine-approver-leader

LeaderElection

master-0_0c068fb9-83ea-4cbd-b75c-6a3c36d19808 became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Started

Started container config-sync-controllers

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Created

Created container: cluster-baremetal-operator

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Started

Started container cluster-baremetal-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

Created

Created container: cluster-samples-operator-watch

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Created

Created container: baremetal-kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Started

Started container baremetal-kube-rbac-proxy

openshift-marketplace

kubelet

community-operators-6t5lg

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-6t5lg

Started

Started container registry-server

openshift-machine-api

cluster-baremetal-operator-5cdb4c5598-qldx6_b1397887-4881-4a8f-8fbd-9e7d8f9c9d98

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-5cdb4c5598-qldx6_b1397887-4881-4a8f-8fbd-9e7d8f9c9d98 became leader

openshift-machine-api

control-plane-machine-set-operator-6686554ddc-8krst_591ab073-1884-42ae-a545-c1b65c1fc13d

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-6686554ddc-8krst_591ab073-1884-42ae-a545-c1b65c1fc13d became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Started

Started container kube-rbac-proxy

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

Started

Started container cluster-samples-operator-watch

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-nrb7q

Started

Started container cloud-credential-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Created

Created container: control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Started

Started container control-plane-machine-set-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-nrb7q

Created

Created container: cloud-credential-operator

openshift-marketplace

kubelet

certified-operators-9nqqp

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-sdsks

Created

Created container: cluster-storage-operator

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-sdsks

Started

Started container cluster-storage-operator

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 538ms (538ms including waiting). Image size: 1229556414 bytes.

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Started

Started container extract-utilities

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-6fbfc8dc8f-sdsks_668b26a2-d7e1-4120-97fd-fc3bff0ff452 became leader

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well")

openshift-marketplace

kubelet

certified-operators-9nqqp

Started

Started container extract-content

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to the same featuregates.Features Enabled/Disabled sets listed in the openshift-insights FeatureGatesInitialized event above.
(x2)

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorVersionChanged

clusteroperator/storage version "operator" changed from "" to "4.18.34"

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform"),Upgradeable changed from Unknown to True ("All is well")

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Started

Started container extract-content

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-marketplace

kubelet

certified-operators-9nqqp

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-9j9zs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 407ms (407ms including waiting). Image size: 918278686 bytes.

openshift-marketplace

kubelet

certified-operators-9nqqp

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 898ms (898ms including waiting). Image size: 1272201949 bytes.

openshift-marketplace

kubelet

redhat-operators-9j9zs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-marketplace

kubelet

redhat-operators-9j9zs

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-9j9zs

Created

Created container: registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 590ms (590ms including waiting). Image size: 918278686 bytes.

openshift-marketplace

kubelet

certified-operators-9nqqp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Created

Created container: registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

Started

Started container machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

Created

Created container: machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-marketplace

kubelet

certified-operators-9nqqp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 751ms (751ms including waiting). Image size: 918278686 bytes.

openshift-marketplace

kubelet

certified-operators-9nqqp

Started

Started container registry-server

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-k7pnc

openshift-marketplace

kubelet

certified-operators-9nqqp

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-4fjw9

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
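
The probe failure above matches the output format of grpc_health_probe: catalog source pods serve their index over gRPC on port 50051, and the startup probe fails when that endpoint does not answer within its one-second budget (typically because the extract-content step is still unpacking the index). A minimal sketch of the same health check in Python, assuming the grpcio and grpcio-health-checking packages are installed and that the pod's port has been forwarded to localhost:50051 (both assumptions, not shown in the events):

    import grpc
    from grpc_health.v1 import health_pb2, health_pb2_grpc

    # Assumes the catalog pod's gRPC port has been forwarded to
    # localhost:50051 (e.g. via a port-forward); address is illustrative.
    channel = grpc.insecure_channel("localhost:50051")
    stub = health_pb2_grpc.HealthStub(channel)
    try:
        # The same check the startup probe performs, with the same 1s budget.
        resp = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=1.0)
        print("status:", health_pb2.HealthCheckResponse.ServingStatus.Name(resp.status))
    except grpc.RpcError as err:
        # DEADLINE_EXCEEDED here corresponds to the probe timeout above.
        print("health check failed:", err.code().name)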

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
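
The FeatureGatesInitialized message above is a verbatim Go dump of the featuregates.Features struct, so the enabled and disabled gate names can be recovered mechanically from the captured text. A minimal sketch, assuming the message has been captured into a string (the sample below is truncated for brevity):

    import re

    # "message" holds FeatureGatesInitialized text like the event above,
    # truncated here for brevity.
    message = ('FeatureGates updated to featuregates.Features{'
               'Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "KMSv1"}, '
               'Disabled:[]v1.FeatureGateName{"GatewayAPI", "NodeSwap"}}')

    enabled_part, disabled_part = message.split("Disabled:", 1)
    enabled = re.findall(r'"([A-Za-z0-9]+)"', enabled_part)
    disabled = re.findall(r'"([A-Za-z0-9]+)"', disabled_part)
    print(len(enabled), "enabled;", len(disabled), "disabled")
    print("GatewayAPI enabled?", "GatewayAPI" in enabled)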

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Killing

Stopping container machine-approver-controller

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-rh4g5

Killing

Stopping container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-955fcfb87 to 0 from 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing

openshift-cluster-machine-approver

replicaset-controller

machine-approver-955fcfb87

SuccessfulDelete

Deleted pod: machine-approver-955fcfb87-rh4g5

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing

openshift-machine-config-operator

replicaset-controller

machine-config-controller-ff46b7bdf

SuccessfulCreate

Created pod: machine-config-controller-ff46b7bdf-z5fkp

openshift-machine-config-operator

deployment-controller

machine-config-controller

ScalingReplicaSet

Scaled up replica set machine-config-controller-ff46b7bdf to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7" in 8.348s (8.348s including waiting). Image size: 862197440 bytes.

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

Created

Created container: machine-api-operator

openshift-cluster-machine-approver

replicaset-controller

machine-approver-754bdc9f9d

SuccessfulCreate

Created pod: machine-approver-754bdc9f9d-xpl2b

default

machineapioperator

machine-api

Status upgrade

Progressing towards operator: 4.18.34

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

Started

Started container machine-api-operator

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-754bdc9f9d to 1

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

Started

Started container machine-config-controller

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

Created

Created container: machine-config-controller

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-machine-config-operator

multus

machine-config-controller-ff46b7bdf-z5fkp

AddedInterface

Add eth0 [10.128.0.70/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-st8tx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc516f6eb3028f5169f1712ac1878d4b591174fd7c363f4ee5aa63162aa01b0e"

openshift-monitoring

multus

prometheus-operator-admission-webhook-8464df8497-st8tx

AddedInterface

Add eth0 [10.128.0.71/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Started

Started container machine-approver-controller

openshift-network-diagnostics

kubelet

network-check-source-7c67b67d47-sctv9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032"

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cluster-machine-approver

master-0_cdba9ff5-6ba3-4756-8991-5713d3488c94

cluster-machine-approver-leader

LeaderElection

master-0_cdba9ff5-6ba3-4756-8991-5713d3488c94 became leader

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Created

Created container: machine-approver-controller

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d" already present on machine

openshift-network-diagnostics

kubelet

network-check-source-7c67b67d47-sctv9

Created

Created container: check-endpoints

openshift-network-diagnostics

kubelet

network-check-source-7c67b67d47-sctv9

Started

Started container check-endpoints

openshift-network-diagnostics

multus

network-check-source-7c67b67d47-sctv9

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Killing

Stopping container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Killing

Stopping container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-8lgqf

Killing

Stopping container kube-rbac-proxy

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-559568b945

SuccessfulDelete

Deleted pod: cluster-cloud-controller-manager-operator-559568b945-8lgqf

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled down replica set cluster-cloud-controller-manager-operator-559568b945 to 0 from 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-3ecec48bcd13e449838cb0ccba9dbd0d successfully generated (release version: 4.18.34, controller version: d4eb710b17481f468c73d93c876a385253a863e0)

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-dc2e407f54b3c5c5b3be9d1778d6c5a4 successfully generated (release version: 4.18.34, controller version: d4eb710b17481f468c73d93c876a385253a863e0)
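
The two RenderedConfigGenerated events mean the render controller has merged each pool's MachineConfigs into a single rendered config (rendered-master-... and rendered-worker-...) that nodes are subsequently annotated with. A minimal sketch that lists the rendered configs back, assuming a reachable kubeconfig and the kubernetes Python client; MachineConfig is a cluster-scoped custom resource under machineconfiguration.openshift.io/v1:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a reachable kubeconfig
    api = client.CustomObjectsApi()

    # MachineConfig: group machineconfiguration.openshift.io, version v1,
    # plural machineconfigs (cluster-scoped).
    mcs = api.list_cluster_custom_object(
        "machineconfiguration.openshift.io", "v1", "machineconfigs")
    for mc in mcs["items"]:
        name = mc["metadata"]["name"]
        if name.startswith("rendered-"):
            print(name)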

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-wkt98

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032" in 9.401s (9.401s including waiting). Image size: 487151732 bytes.

openshift-kube-scheduler

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-7c8df9b496 to 1

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-st8tx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc516f6eb3028f5169f1712ac1878d4b591174fd7c363f4ee5aa63162aa01b0e" in 9.014s (9.014s including waiting). Image size: 444572615 bytes.

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-7c8df9b496

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Started

Started container router

openshift-machine-config-operator

kubelet

machine-config-server-wkt98

Started

Started container machine-config-server

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-st8tx

Created

Created container: prometheus-operator-admission-webhook

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-machine-config-operator

kubelet

machine-config-server-wkt98

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-st8tx

Started

Started container prometheus-operator-admission-webhook

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-machine-config-operator

kubelet

machine-config-server-wkt98

Created

Created container: machine-config-server

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Created

Created container: router

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Started

Started container kube-rbac-proxy

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_90508abb-6da2-440b-9a4d-8ee6f045eefa became leader

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Created

Created container: config-sync-controllers

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Started

Started container config-sync-controllers

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-monitoring

replicaset-controller

prometheus-operator-5ff8674d55

SuccessfulCreate

Created pod: prometheus-operator-5ff8674d55-qxpv9

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-5ff8674d55 to 1

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bedf16a5f527126e934c37d2f24886de4a54c9bd9d45b18821d02eefd8b5f9e"

openshift-monitoring

multus

prometheus-operator-5ff8674d55-qxpv9

AddedInterface

Add eth0 [10.128.0.73/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bedf16a5f527126e934c37d2f24886de4a54c9bd9d45b18821d02eefd8b5f9e" in 1.363s (1.363s including waiting). Image size: 461569069 bytes.

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-3ecec48bcd13e449838cb0ccba9dbd0d

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

Created

Created container: prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-3ecec48bcd13e449838cb0ccba9dbd0d

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Done
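
Taken together, the three AnnotationChange events record the node controller's sync protocol: desiredConfig is set first, currentConfig follows once the node has applied it, and state=Done means master-0 is fully synced to rendered-master-3ecec48bcd13e449838cb0ccba9dbd0d. A minimal sketch that reads the same annotations back, assuming a reachable kubeconfig and the kubernetes Python client:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a reachable kubeconfig
    node = client.CoreV1Api().read_node("master-0")
    annotations = node.metadata.annotations or {}
    prefix = "machineconfiguration.openshift.io/"
    # state=Done with currentConfig == desiredConfig means the node is synced.
    for key in ("desiredConfig", "currentConfig", "state"):
        print(key, "=", annotations.get(prefix + key))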

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-bx9dn

openshift-monitoring

replicaset-controller

openshift-state-metrics-74cc79fd76

SuccessfulCreate

Created pod: openshift-state-metrics-74cc79fd76-s9b9v

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-74cc79fd76 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-monitoring

replicaset-controller

kube-state-metrics-68b88f8cb5

SuccessfulCreate

Created pod: kube-state-metrics-68b88f8cb5-qjxhc

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-68b88f8cb5 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-bx9dn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3888758fa24689d4e63dfb78ed97a852c687295adcabdabf8cdc4a2beaa42398"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreateFailed

Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found
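
The ClusterRoleBindingCreateFailed event above is an ordering race rather than a persistent error: the binding references the cluster-monitoring-view ClusterRole, which the same operator creates only a few events later (see the ClusterRoleCreated entry below), and the operator retries on its next sync. A minimal sketch for surfacing failure-style events like this one, assuming a reachable kubeconfig and the kubernetes Python client:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a reachable kubeconfig
    v1 = client.CoreV1Api()
    # Print only failure-style events, e.g. ClusterRoleBindingCreateFailed.
    for ev in v1.list_event_for_all_namespaces().items:
        if ev.reason and "Failed" in ev.reason:
            print(ev.metadata.namespace, ev.reason, "-", ev.message)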

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.34} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c}]
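
The OperatorVersionChanged event above is the machine-config ClusterOperator reporting its versions list for the first time, which is how the cluster-version operator tracks the rollout to 4.18.34. A minimal sketch that reads the same versions back, assuming a reachable kubeconfig; ClusterOperator is a cluster-scoped resource under config.openshift.io/v1:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a reachable kubeconfig
    co = client.CustomObjectsApi().get_cluster_custom_object(
        "config.openshift.io", "v1", "clusteroperators", "machine-config")
    # status.versions mirrors the "[{operator 4.18.34} ...]" text above.
    for version in co["status"]["versions"]:
        print(version["name"], "=", version["version"])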

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-bx9dn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3888758fa24689d4e63dfb78ed97a852c687295adcabdabf8cdc4a2beaa42398" in 1.103s (1.103s including waiting). Image size: 417687610 bytes.

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bececf32872455775075a3d35100302396ca58ae29827b24d7df086d8ac14432"

openshift-monitoring

multus

openshift-state-metrics-74cc79fd76-s9b9v

AddedInterface

Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

multus

kube-state-metrics-68b88f8cb5-qjxhc

AddedInterface

Add eth0 [10.128.0.75/23] from ovn-kubernetes

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7"

openshift-monitoring

kubelet

node-exporter-bx9dn

Created

Created container: init-textfile

openshift-monitoring

kubelet

node-exporter-bx9dn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

node-exporter-bx9dn

Started

Started container init-textfile

openshift-monitoring

kubelet

node-exporter-bx9dn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3888758fa24689d4e63dfb78ed97a852c687295adcabdabf8cdc4a2beaa42398" already present on machine

openshift-monitoring

kubelet

node-exporter-bx9dn

Created

Created container: node-exporter

openshift-monitoring

kubelet

node-exporter-bx9dn

Started

Started container node-exporter

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-bx9dn

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-bx9dn

Created

Created container: kube-rbac-proxy

(x10)

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing

(x11)

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Created

Created container: kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7" in 4.153s (4.153s including waiting). Image size: 440559528 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-ffspe3f0nbfal -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-6474759988 to 1

openshift-monitoring

replicaset-controller

metrics-server-6474759988

SuccessfulCreate

Created pod: metrics-server-6474759988-dnw4m

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bececf32872455775075a3d35100302396ca58ae29827b24d7df086d8ac14432" in 4.058s (4.058s including waiting). Image size: 431974231 bytes.

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Created

Created container: openshift-state-metrics

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

Started

Started container openshift-state-metrics

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f"

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.34} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c}]

openshift-monitoring

multus

metrics-server-6474759988-dnw4m

AddedInterface

Add eth0 [10.128.0.76/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f" in 2.056s (2.056s including waiting). Image size: 471430788 bytes.

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

Started

Started container metrics-server

openshift-network-node-identity

master-0_c019abde-ecd0-487f-8cee-d663f93509b9

ovnkube-identity

LeaderElection

master-0_c019abde-ecd0-487f-8cee-d663f93509b9 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-3ecec48bcd13e449838cb0ccba9dbd0d

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-3ecec48bcd13e449838cb0ccba9dbd0d to Done

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-3ecec48bcd13e449838cb0ccba9dbd0d and node has been uncordoned

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

APIServiceCreated

Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing

openshift-catalogd

catalogd-controller-manager-7f8b8b6f4c-w2q2q_00f8358c-5141-4a06-b075-689526cec2c6

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7f8b8b6f4c-w2q2q_00f8358c-5141-4a06-b075-689526cec2c6 became leader

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-operator-controller

operator-controller-controller-manager-6598bfb6c4-7nhvs_9091a902-1b8f-4d38-8fb2-fc753dc140af

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-6598bfb6c4-7nhvs_9091a902-1b8f-4d38-8fb2-fc753dc140af became leader

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

ProbeError

Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

Unhealthy

Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused

openshift-cloud-controller-manager-operator

master-0_ce1b88cd-d7fb-4bd9-8b56-ac4c0ada11e8

cluster-cloud-controller-manager-leader

LeaderElection

master-0_ce1b88cd-d7fb-4bd9-8b56-ac4c0ada11e8 became leader

openshift-cloud-controller-manager-operator

master-0_5a6af1c6-60ab-4f74-b08d-d3ad1dcb9925

cluster-cloud-config-sync-leader

LeaderElection

master-0_5a6af1c6-60ab-4f74-b08d-d3ad1dcb9925 became leader
(x3)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

BackOff

Back-off restarting failed container ingress-operator in pod ingress-operator-677db989d6-blw5x_openshift-ingress-operator(4d0b9fbc-a1f8-4a98-99de-758734bd1a5b)
(x3)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0" already present on machine
(x4)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Created

Created container: ingress-operator
(x4)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-blw5x

Started

Started container ingress-operator

openshift-kube-apiserver-operator

kube-apiserver-operator

openshift-kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-68bd585b-7gtw2_161e3cea-aa4f-4ebb-b73b-43d5e1e2531c became leader

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_70254dbf-19a8-442d-9ba2-d244a93a2c5b became leader

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-85ss7

openshift-multus

kubelet

cni-sysctl-allowlist-ds-85ss7

Started

Started container kube-multus-additional-cni-plugins

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-multus

kubelet

cni-sysctl-allowlist-ds-85ss7

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-85ss7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" already present on machine

openshift-multus

kubelet

cni-sysctl-allowlist-ds-85ss7

Killing

Stopping container kube-multus-additional-cni-plugins

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0 I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0308 00:22:12.683916 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0308 00:22:12.683925 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0 F0308 00:22:56.698015 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-799b6db4d7-rj9cl_bda9d7cc-ec6c-4757-8834-f1600cec7c8f became leader

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.build.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.apps.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.project.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.image.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.authorization.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.security.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.quota.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.route.openshift.io because it was missing

openshift-multus

replicaset-controller

multus-admission-controller-7769569c45

SuccessfulCreate

Created pod: multus-admission-controller-7769569c45-5n69x

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-7769569c45 to 1

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.template.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9" already present on machine

openshift-multus

multus

multus-admission-controller-7769569c45-5n69x

AddedInterface

Add eth0 [10.128.0.77/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Killing

Stopping container kube-rbac-proxy

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-8d675b596 to 0 from 1

openshift-multus

replicaset-controller

multus-admission-controller-8d675b596

SuccessfulDelete

Deleted pod: multus-admission-controller-8d675b596-jgdmb

openshift-multus

kubelet

multus-admission-controller-8d675b596-jgdmb

Killing

Stopping container multus-admission-controller

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/route.openshift.io/v1: bad status from 
https://10.128.0.39:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/template.openshift.io/v1: 401"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/template.openshift.io/v1: 401" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/image.openshift.io/v1: bad 
status from https://10.128.0.39:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/template.openshift.io/v1: 401"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.34"}] to [{"operator" "4.18.34"} {"openshift-apiserver" "4.18.34"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.34"
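
The pair of status events above is the operator publishing per-operand versions on its ClusterOperator. As a rough illustration, not taken from this log, the same data can be read with the OpenShift config client; the kubeconfig path is a placeholder:

```go
// Illustrative sketch: read the status.versions entries that the
// OperatorStatusChanged/OperatorVersionChanged events describe.
package main

import (
	"context"
	"fmt"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	co, err := configclient.NewForConfigOrDie(cfg).ConfigV1().
		ClusterOperators().Get(context.TODO(), "openshift-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, v := range co.Status.Versions {
		fmt.Printf("%s = %s\n", v.Name, v.Version) // e.g. operator = 4.18.34
	}
}
```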

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

multus

installer-1-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

PodCreated

Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NeedsReinstall

apiServices not installed
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallCheckFailed

install timeout

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Created

Created container: installer
(x3)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed
(x4)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install
(x3)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-85ss7

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
(x3)
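
The "cannot register an exec PID: container is stopping" errors above are an exec readiness probe racing container teardown: the runtime refuses to start the probe process inside a stopping container, so the kubelet records a probe error rather than a plain failure. A minimal sketch of the general shape of such a probe; the command and thresholds are illustrative, not the actual cni-sysctl-allowlist check:

```go
// Illustrative sketch of an exec readiness probe; the command is a
// placeholder, not the real allowlist daemonset check.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"test", "-f", "/ready"}}, // placeholder
		},
		PeriodSeconds:    10,
		FailureThreshold: 3,
	}
	// While the container is stopping, the runtime cannot start this exec
	// process, which surfaces as the rpc "container is stopping" error above.
	fmt.Printf("%+v\n", probe)
}
```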

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7c6989d6c4-dkqc4_9ccc3965-40df-4607-a769-7adbe79cfe13 became leader
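
Events like "…became leader" come from the standard client-go leader-election machinery. A minimal sketch of that pattern, with placeholder lock name, namespace, identity, and timings:

```go
// Illustrative sketch of client-go leader election behind LeaderElection
// events; all names and durations are placeholders.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-operator-lock", Namespace: "default"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "pod-name_uid"}, // placeholder
	}

	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("leadership lost") },
		},
	})
}
```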

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory")

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "optional secret/webhook-authenticator has been created"
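
StartingNewRevision and the surrounding ConfigMapCreated/SecretCreated events are the static-pod revision controller snapshotting its inputs into per-revision objects (kube-apiserver-pod-2, config-2, and so on). A sketch, assuming revision 2 and a placeholder kubeconfig, of listing that snapshot:

```go
// Illustrative sketch: list the ConfigMaps that make up static-pod
// revision 2 in openshift-kube-apiserver.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cms, err := client.CoreV1().ConfigMaps("openshift-kube-apiserver").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, cm := range cms.Items {
		if strings.HasSuffix(cm.Name, "-2") { // revision suffix
			fmt.Println(cm.Name) // kube-apiserver-pod-2, config-2, ...
		}
	}
}
```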

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

openshift-kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }
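
The ObservedConfigChanged diff above shows the config observer merging the new webhook-authenticator arguments into the operator's observed config. A sketch of reading the merged result back from the kubeapiserver/cluster operator resource; the field navigation follows the diff above and the kubeconfig path is a placeholder:

```go
// Illustrative sketch: read spec.observedConfig from the kubeapiserver
// operator resource and pull out the webhook-authenticator arguments.
package main

import (
	"context"
	"encoding/json"
	"fmt"

	operatorclient "github.com/openshift/client-go/operator/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	kas, err := operatorclient.NewForConfigOrDie(cfg).OperatorV1().
		KubeAPIServers().Get(context.TODO(), "cluster", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	var observed map[string]any
	if err := json.Unmarshal(kas.Spec.ObservedConfig.Raw, &observed); err != nil {
		panic(err)
	}
	args, _ := observed["apiServerArguments"].(map[string]any)
	fmt.Println(args["authentication-token-webhook-config-file"])
}
```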

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

openshift-kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-86d7cdfdfb-pfdrx_ea297c24-ef7b-4a50-86b8-d6fcc7351e23 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator

openshift-kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
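
FeatureGatesInitialized reports the resolved enabled/disabled gate sets for this payload. A sketch of reading the same data from the FeatureGate "cluster" resource; the status layout shown matches recent OpenShift config API versions and may differ on older clusters:

```go
// Illustrative sketch: read the per-version feature gates that
// FeatureGatesInitialized describes.
package main

import (
	"context"
	"fmt"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	fg, err := configclient.NewForConfigOrDie(cfg).ConfigV1().
		FeatureGates().Get(context.TODO(), "cluster", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, details := range fg.Status.FeatureGates {
		fmt.Println("payload:", details.Version)
		for _, e := range details.Enabled {
			fmt.Println("  enabled:", e.Name)
		}
		for _, d := range details.Disabled {
			fmt.Println("  disabled:", d.Name)
		}
	}
}
```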

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-controller-manager

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-2-master-0

Created

Created container: installer

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-7f65c457f5-st7mk_db1c0300-7116-4adc-b6ef-0c445a630b2c became leader

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreateFailed

Failed to create Secret/service-account-private-key-3 -n openshift-kube-controller-manager: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused
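
The two CreateFailed events above are operators dialing 172.30.0.1:443 while the bootstrap kube-apiserver shuts down; the failures clear once the static-pod kube-apiserver serves. A sketch of the usual client-side response, polling through the transient refusal (namespace and timings are placeholders):

```go
// Illustrative sketch: poll through the transient "connection refused"
// window while the kube-apiserver rolls from bootstrap to static pod.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().Namespaces().Get(ctx, "openshift-kube-apiserver", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling; rollover refusals are transient
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("API server reachable again")
}
```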

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed
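
ShutdownInitiated, HTTPServerStoppedListening, InFlightRequestsDrained, TerminationPreShutdownHooksFinished, and TerminationGracefulTerminationFinished are the kube-apiserver's ordered graceful-termination markers. A sketch of pulling just this lifecycle trail out of the event stream, with a placeholder kubeconfig:

```go
// Illustrative sketch: list the apiserver lifecycle events recorded in the
// "default" namespace against the openshift-kube-apiserver object.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	events, err := client.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=openshift-kube-apiserver",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Println(e.Reason, "-", e.Message)
	}
}
```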

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true
(x11)
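
readyz=true marks the replacement kube-apiserver passing its aggregated readiness checks. A sketch of querying the same endpoint through a client-go REST client:

```go
// Illustrative sketch: fetch /readyz from the API server the kubeconfig
// points at.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	raw, err := client.Discovery().RESTClient().Get().
		AbsPath("/readyz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // "ok" when ready
}
```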

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.34 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_2dc51614-f6e9-41a8-82fc-4799d066fb10 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.34"}] to [{"raw-internal" "4.18.34"} {"kube-apiserver" "1.31.14"} {"operator" "4.18.34"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config-2 -n openshift-kube-apiserver: caused by changes in data.config.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-canary namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-user-settings namespace

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

RevisionTriggered

new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-operator namespace

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-5qffz

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing
(x12)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.34"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: ",Available changed from False to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14"

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-7b45f5889c to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-5fe8510kelpgf -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled down replica set metrics-server-6474759988 to 0 from 1
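
The metrics-server events above and below are an ordinary Deployment rolling update: the deployment-controller scales the new ReplicaSet (7b45f5889c) up and the old one (6474759988) down. A sketch, reusing the names from the log and a placeholder kubeconfig, of checking that the rollout has converged:

```go
// Illustrative sketch: check rollout progress of the metrics-server
// Deployment from its status counters.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	dep, err := client.AppsV1().Deployments("openshift-monitoring").
		Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("updated=%d available=%d desired=%d\n",
		dep.Status.UpdatedReplicas, dep.Status.AvailableReplicas, *dep.Spec.Replicas)
}
```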

openshift-console-operator

replicaset-controller

console-operator-6c7fb6b958

SuccessfulCreate

Created pod: console-operator-6c7fb6b958-db7d8

openshift-console-operator

deployment-controller

console-operator

ScalingReplicaSet

Scaled up replica set console-operator-6c7fb6b958 to 1

openshift-monitoring

replicaset-controller

metrics-server-6474759988

SuccessfulDelete

Deleted pod: metrics-server-6474759988-dnw4m

openshift-monitoring

replicaset-controller

metrics-server-7b45f5889c

SuccessfulCreate

Created pod: metrics-server-7b45f5889c-z48tj

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"3be405ec-b2bb-4184-b6a6-a91dbc1f4698\", ResourceVersion:\"11601\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 8, 0, 14, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 8, 0, 19, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001b89ba8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Progressing changed from False to True (""),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not 
found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"3be405ec-b2bb-4184-b6a6-a91dbc1f4698\", ResourceVersion:\"11601\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 8, 0, 14, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 8, 0, 19, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001b89ba8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"
(x7)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

telemeter-client-6cfc594d97

SuccessfulCreate

Created pod: telemeter-client-6cfc594d97-x62fk

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-6cfc594d97 to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"service-account-private-key-3\" already exists\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-monitoring

replicaset-controller

monitoring-plugin-6db79546f6

SuccessfulCreate

Created pod: monitoring-plugin-6db79546f6-gdz4k
(x15)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"service-account-private-key-3\" already exists\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"service-account-private-key-3\" already exists\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-3,service-account-private-key-3\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): 
Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

RequiredInstallerResourcesMissing

secrets: localhost-recovery-client-token-3,service-account-private-key-3
(x5)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreateFailed

Failed to create Secret/service-account-private-key-3 -n openshift-kube-controller-manager: secrets "service-account-private-key-3" already exists
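
The create failure above is the benign half of a create race: revisioned payloads are written once and not mutated, so a controller can safely treat an AlreadyExists error as success. A minimal client-go sketch of that pattern, assuming in-cluster credentials; the secret and namespace names come from the event, and nothing here is the operator's actual code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes running inside the cluster
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        sec := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Name: "service-account-private-key-3"}}
        _, err = client.CoreV1().Secrets("openshift-kube-controller-manager").
            Create(context.TODO(), sec, metav1.CreateOptions{})
        if apierrors.IsAlreadyExists(err) {
            // A racing writer (or an earlier attempt) created it first; for an
            // immutable revisioned payload this is safe to treat as success.
            fmt.Println("secret already present, continuing")
        } else if err != nil {
            panic(err)
        }
    }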

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: " to "All is well"

openshift-monitoring

deployment-controller

monitoring-plugin

ScalingReplicaSet

Scaled up replica set monitoring-plugin-6db79546f6 to 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"3be405ec-b2bb-4184-b6a6-a91dbc1f4698\", ResourceVersion:\"11601\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 8, 0, 14, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 8, 0, 19, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001b89ba8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found 
(check authentication operator, it is supposed to create this)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"3be405ec-b2bb-4184-b6a6-a91dbc1f4698\", ResourceVersion:\"11601\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 8, 0, 14, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 8, 0, 19, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001b89ba8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"
(x4)

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapUpdateFailed

Failed to update ConfigMap/config-2 -n openshift-kube-apiserver: Operation cannot be fulfilled on configmaps "config-2": the object has been modified; please apply your changes to the latest version and try again
(x10)
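
The conflict message is the API server's optimistic-concurrency check firing: the update carried a stale resourceVersion. The remedy is exactly what the message says, re-read the latest object and retry, which client-go packages as retry.RetryOnConflict. A sketch under the same in-cluster assumptions as above (the configmap name and namespace come from the event; the mutation shown is a placeholder):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes running inside the cluster
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read the latest object on every attempt so the update carries
            // a fresh resourceVersion instead of the stale one that conflicted.
            cm, err := client.CoreV1().ConfigMaps("openshift-kube-apiserver").
                Get(context.TODO(), "config-2", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if cm.Data == nil {
                cm.Data = map[string]string{}
            }
            cm.Data["example-key"] = "example-value" // placeholder mutation only
            _, err = client.CoreV1().ConfigMaps("openshift-kube-apiserver").
                Update(context.TODO(), cm, metav1.UpdateOptions{})
            return err // a Conflict error here triggers another attempt
        })
        if err != nil {
            panic(err)
        }
    }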

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed"

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-wkt98

FailedMount

MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-wkt98

FailedMount

MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "image-import-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "etcd-serving-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "audit" : failed to sync configmap cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "encryption-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-bx9dn

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

FailedMount

MountVolume.SetUp failed for volume "catalogserver-certs" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

FailedMount

MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

FailedMount

MountVolume.SetUp failed for volume "catalogserver-certs" : failed to sync secret cache: timed out waiting for the condition

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

FailedMount

MountVolume.SetUp failed for volume "ca-certs" : failed to sync configmap cache: timed out waiting for the condition

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-bc2m2

FailedMount

MountVolume.SetUp failed for volume "signing-key" : failed to sync secret cache: timed out waiting for the condition

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-bc2m2

FailedMount

MountVolume.SetUp failed for volume "signing-cabundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-9c44c86f9-rplwv

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-bx9dn

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

FailedMount

MountVolume.SetUp failed for volume "ca-certs" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-9c44c86f9-rplwv

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

Killing

Stopping container metrics-server

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

FailedMount

MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-k7pnc

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on configmaps \"config-2\": the object has been modified; please apply your changes to the latest version and try again\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: "

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-z5fkp

FailedMount

MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-s9b9v

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

Killing

Stopping container metrics-server

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-8lf4q

FailedMount

MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-8f89dfddd-brq9l

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-bncfj

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-apiserver

kubelet

apiserver-85cb8cb9bb-bmx44

FailedMount

MountVolume.SetUp failed for volume "etcd-client" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "federate-client-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-console-operator

kubelet

console-operator-6c7fb6b958-db7d8

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-console-operator

kubelet

console-operator-6c7fb6b958-db7d8

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-console-operator

kubelet

console-operator-6c7fb6b958-db7d8

FailedMount

MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-bx9dn

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-bx9dn

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"service-account-private-key-3\" already exists\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-3,service-account-private-key-3\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on configmaps \"revision-status-3\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-3,service-account-private-key-3\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused"
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-bx9dn

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-7769569c45-5n69x

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-bx9dn

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-ingress-canary

kubelet

ingress-canary-5qffz

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-qxpv9

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "federate-client-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-qjxhc

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on configmaps \"config-2\": the object has been modified; please apply your changes to the latest version and try again\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: "
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "secret-telemeter-client" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

FailedMount

MountVolume.SetUp failed for volume "telemeter-client-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

monitoring-plugin-6db79546f6-gdz4k

FailedMount

MountVolume.SetUp failed for volume "monitoring-plugin-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on configmaps \"revision-status-3\": the object has been modified; please apply your changes to the latest version and try again\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on configmaps \"revision-status-3\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-3,service-account-private-key-3\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on configmaps \"revision-status-3\": the object has been modified; please apply your changes to the latest version and try again\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused"
(x2)

openshift-monitoring

kubelet

monitoring-plugin-6db79546f6-gdz4k

FailedMount

MountVolume.SetUp failed for volume "monitoring-plugin-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

ProbeError

Liveness probe error: Get "http://localhost:1936/healthz": dial tcp [::1]:1936: connect: connection refused body:

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-monitoring

multus

monitoring-plugin-6db79546f6-gdz4k

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-monitoring

kubelet

monitoring-plugin-6db79546f6-gdz4k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b42a9b781e6d974a9f6f89286c95c16e18e78d4682420a29ae7c5aa35012191"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/controller-manager-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/controller-manager-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Created

Created container: router

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

ProbeError

Readiness probe error: Get "http://localhost:1936/healthz/ready": dial tcp [::1]:1936: connect: connection refused body:

openshift-monitoring

multus

monitoring-plugin-6db79546f6-gdz4k

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Unhealthy

Liveness probe failed: Get "http://localhost:1936/healthz": dial tcp [::1]:1936: connect: connection refused

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Started

Started container router

openshift-monitoring

kubelet

monitoring-plugin-6db79546f6-gdz4k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b42a9b781e6d974a9f6f89286c95c16e18e78d4682420a29ae7c5aa35012191"

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032" already present on machine

openshift-ingress

kubelet

router-default-79f8cd6fdd-r6nkv

Unhealthy

Readiness probe failed: Get "http://localhost:1936/healthz/ready": dial tcp [::1]:1936: connect: connection refused

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

multus

telemeter-client-6cfc594d97-x62fk

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e"

openshift-ingress-canary

multus

ingress-canary-5qffz

AddedInterface

Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

multus

metrics-server-7b45f5889c-z48tj

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes
(x2)

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:22:12.648931 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683855 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:22:12.683916 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.683925 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:22:12.695541 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:22:42.696683 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:22:56.698015 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-console-operator

multus

console-operator-6c7fb6b958-db7d8

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-console-operator

kubelet

console-operator-6c7fb6b958-db7d8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb"

openshift-ingress-canary

kubelet

ingress-canary-5qffz

Created

Created container: serve-healthcheck-canary

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-5cd89459d5 to 1

openshift-monitoring

replicaset-controller

thanos-querier-5cd89459d5

SuccessfulCreate

Created pod: thanos-querier-5cd89459d5-wwnjs

openshift-kube-controller-manager

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-85cb8cb9bb-bmx44 pod)",Available changed from True to False ("APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.")

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-kube-controller-manager

static-pod-installer

installer-2-master-0

StaticPodInstallerFailed

Installing revision 2: client rate limiter Wait returned an error: context canceled
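
The inner error in this event is a Kubernetes client surfacing a canceled request context while a call was still queued in its client-side rate limiter, consistent with the Killing event for installer-2-master-0 above: the pod was stopped mid-request. A minimal sketch that reproduces the inner error, assuming golang.org/x/time/rate (the limiter values are arbitrary assumptions):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // assumed QPS/burst values
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // pod shutdown cancels the request context before Wait returns

	if err := limiter.Wait(ctx); err != nil {
		// Wait returns the context's error, so this prints:
		// client rate limiter Wait returned an error: context canceled
		fmt.Println("client rate limiter Wait returned an error:", err)
	}
}
```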

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.86/23] from ovn-kubernetes

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.85/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f"

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

Created

Created container: metrics-server

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

metrics-server-7b45f5889c-z48tj

Started

Started container metrics-server

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-2m7s0hn4nptd -n openshift-monitoring because it was missing

openshift-ingress-canary

kubelet

ingress-canary-5qffz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0" already present on machine

openshift-ingress-canary

kubelet

ingress-canary-5qffz

Started

Started container serve-healthcheck-canary

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88"

openshift-monitoring

kubelet

monitoring-plugin-6db79546f6-gdz4k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b42a9b781e6d974a9f6f89286c95c16e18e78d4682420a29ae7c5aa35012191" in 2.653s (2.653s including waiting). Image size: 447810376 bytes.

openshift-kube-controller-manager

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine
(x9)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

openshift-kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10
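
This event pairs with the SATokenSignerControllerOK event further down ("found expected kube-apiserver endpoints"): the controller compares the addresses on the kube-apiserver endpoints object against the set it expects before proceeding. A rough sketch of the shape of that check; the expected address below is a placeholder, not a value from this cluster:

```go
package main

import "fmt"

// expectedAddrs is a placeholder set; the real controller derives the
// expected kube-apiserver endpoint addresses from cluster configuration.
var expectedAddrs = map[string]bool{"10.0.0.1": true}

// unexpected returns every observed address not in the expected set.
func unexpected(observed []string) []string {
	var out []string
	for _, a := range observed {
		if !expectedAddrs[a] {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	// 192.168.32.10 is the address reported in the event above.
	if bad := unexpected([]string{"192.168.32.10"}); len(bad) > 0 {
		fmt.Println("unexpected addresses:", bad)
	}
}
```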

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node." to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\""

openshift-monitoring

kubelet

monitoring-plugin-6db79546f6-gdz4k

Created

Created container: monitoring-plugin

openshift-kube-controller-manager

kubelet

installer-3-master-0

Created

Created container: installer

openshift-monitoring

multus

thanos-querier-5cd89459d5-wwnjs

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-monitoring

kubelet

monitoring-plugin-6db79546f6-gdz4k

Started

Started container monitoring-plugin

openshift-kube-controller-manager

kubelet

installer-3-master-0

Started

Started container installer

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

openshift-kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }
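
The "+" lines in this diff show the config observer wiring the kube-apiserver to the OAuth metadata that the authentication operator publishes (the v4-0-config-system-metadata ConfigMap created a few events below). A minimal sketch of just the added stanza, serialized as JSON; the key and file path are taken from the event, and everything else in the observed config is omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Only the stanza added by the "+" lines in the event above.
	observed := map[string]any{
		"authConfig": map[string]any{
			"oauthMetadataFile": "/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata",
		},
	}
	out, _ := json.MarshalIndent(observed, "", "  ")
	fmt.Println(string(out))
}
```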

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\""

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-48mqvdnajl6js -n openshift-monitoring because it was missing
(x2)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from False to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-kube-apiserver

kubelet

installer-2-master-0

Created

Created container: installer

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" in 4.027s (4.027s including waiting). Image size: 437909442 bytes.

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Started

Started container reload

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Created

Created container: reload

openshift-kube-apiserver

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Started

Started container telemeter-client

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Created

Created container: telemeter-client

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e" in 4.685s (4.685s including waiting). Image size: 480534195 bytes.

openshift-monitoring

kubelet

telemeter-client-6cfc594d97-x62fk

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-console-operator

kubelet

console-operator-6c7fb6b958-db7d8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb" in 4.397s (4.397s including waiting). Image size: 512235767 bytes.

openshift-console-operator

kubelet

console-operator-6c7fb6b958-db7d8

Created

Created container: console-operator

openshift-console-operator

kubelet

console-operator-6c7fb6b958-db7d8

Started

Started container console-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

openshift-kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-64488f9d78-vnl28_b5bd6fdd-00b6-406c-8635-ac169359b1ee became leader

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console",Upgradeable changed from Unknown to False ("DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console")

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-console

replicaset-controller

downloads-84f57b9877

SuccessfulCreate

Created pod: downloads-84f57b9877-8g27w

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.34"
(x2)

openshift-console

controllermanager

downloads

NoPods

No matching pods found

openshift-console

deployment-controller

downloads

ScalingReplicaSet

Scaled up replica set downloads-84f57b9877 to 1

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-6c7fb6b958-db7d8_0f06436a-0098-4e90-b40d-a6c5d8958d43 became leader

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963"
(x2)

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-85cb8cb9bb-bmx44 pod)" to "All is well",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\""

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63"

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Started

Started container thanos-query

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-5cd89459d5-wwnjs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" in 4.39s (4.39s including waiting). Image size: 502712961 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

StartingNewRevision

new revision 3 triggered by "optional configmap/oauth-metadata has been created"

openshift-console

multus

downloads-84f57b9877-8g27w

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-console | kubelet | downloads-84f57b9877-8g27w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: "
openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" to "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container kube-rbac-proxy-web
openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: kube-rbac-proxy
openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" (x3)
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963" in 2.501s (2.501s including waiting). Image size: 467539377 bytes.
openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing
openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-6df5fc69d to 1
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container prom-label-proxy
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container prom-label-proxy
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: prom-label-proxy
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\n- \t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n \t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n \t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n \t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n \t},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63" in 1.999s (1.999s including waiting). Image size: 413103557 bytes.
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy
openshift-authentication | replicaset-controller | oauth-openshift-6df5fc69d | SuccessfulCreate | Created pod: oauth-openshift-6df5fc69d-thc6n
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | openshift-kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine
openshift-console | replicaset-controller | console-5c84b9c874 | SuccessfulCreate | Created pod: console-5c84b9c874-8xl2l
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: prom-label-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5c84b9c874 to 1
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found"
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container kube-rbac-proxy-metrics
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: kube-rbac-proxy-metrics
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container kube-rbac-proxy-rules
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: kube-rbac-proxy-rules
openshift-authentication | kubelet | oauth-openshift-6df5fc69d-thc6n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f"
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: kube-rbac-proxy-metrics
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy
openshift-authentication | multus | oauth-openshift-6df5fc69d-thc6n | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | openshift-kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy
openshift-console | multus | console-5c84b9c874-8xl2l | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container kube-rbac-proxy-rules
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Created | Created container: kube-rbac-proxy-rules
openshift-monitoring | kubelet | thanos-querier-5cd89459d5-wwnjs | Started | Started container kube-rbac-proxy-metrics
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63" already present on machine
openshift-console | kubelet | console-5c84b9c874-8xl2l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8"
openshift-authentication | kubelet | oauth-openshift-6df5fc69d-thc6n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f" in 2.225s (2.225s including waiting). Image size: 481454434 bytes.
openshift-authentication | kubelet | oauth-openshift-6df5fc69d-thc6n | Created | Created container: oauth-openshift
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor
openshift-authentication | kubelet | oauth-openshift-6df5fc69d-thc6n | Started | Started container oauth-openshift
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | openshift-kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"
openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-3 -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-5c74bfc494-bh886_69b745ce-ef25-4b70-861a-d6c7e7cecaff became leader
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-console | kubelet | console-5c84b9c874-8xl2l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" in 5.317s (5.317s including waiting). Image size: 633876767 bytes.
openshift-console | kubelet | console-5c84b9c874-8xl2l | Created | Created container: console
openshift-console | kubelet | console-5c84b9c874-8xl2l | Started | Started container console
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine (x2)
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.34" (x2)
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14"
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader
openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0"
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.34"}] to [{"raw-internal" "4.18.34"} {"kube-scheduler" "1.31.14"} {"operator" "4.18.34"}]
openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapUpdated

Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "All is well"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 5 because static pod is ready

openshift-route-controller-manager

kubelet

route-controller-manager-544c885f6d-dr4gh

Killing

Stopping container route-controller-manager

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-5d647dccbb to 1 from 0

openshift-route-controller-manager

replicaset-controller

route-controller-manager-544c885f6d

SuccessfulDelete

Deleted pod: route-controller-manager-544c885f6d-dr4gh

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-544c885f6d to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-5d647dccbb

SuccessfulCreate

Created pod: route-controller-manager-5d647dccbb-6cz8b

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e95c47e9d"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ + "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ + "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52d35a623b"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, }

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.34"

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5ddc94864c to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5b4bdf67b6 to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-8565d84698-49hzm_0adbd0fe-8323-4cb2-98e3-2a76a86cba18 became leader

openshift-controller-manager

replicaset-controller

controller-manager-5ddc94864c

SuccessfulCreate

Created pod: controller-manager-5ddc94864c-7nwdc

openshift-controller-manager

replicaset-controller

controller-manager-5b4bdf67b6

SuccessfulDelete

Deleted pod: controller-manager-5b4bdf67b6-8rdjs

openshift-controller-manager

kubelet

controller-manager-5b4bdf67b6-8rdjs

Killing

Stopping container controller-manager

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.config.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.config.yaml

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.")

openshift-authentication

replicaset-controller

oauth-openshift-69dcf9d7fd

SuccessfulCreate

Created pod: oauth-openshift-69dcf9d7fd-5tbt2

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-69dcf9d7fd to 1 from 0

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-6df5fc69d to 0 from 1

openshift-authentication

kubelet

oauth-openshift-6df5fc69d-thc6n

Killing

Stopping container oauth-openshift

openshift-authentication

replicaset-controller

oauth-openshift-6df5fc69d

SuccessfulDelete

Deleted pod: oauth-openshift-6df5fc69d-thc6n

openshift-console

replicaset-controller

console-76c777474b

SuccessfulCreate

Created pod: console-76c777474b-n9mhf

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-76c777474b to 1

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-console

multus

console-76c777474b-n9mhf

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-route-controller-manager

multus

route-controller-manager-5d647dccbb-6cz8b

AddedInterface

Add eth0 [10.128.0.94/23] from ovn-kubernetes

openshift-controller-manager

multus

controller-manager-5ddc94864c-7nwdc

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0" in 4.889s (4.889s including waiting). Image size: 605698200 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-5d647dccbb-6cz8b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-route-controller-manager

kubelet

route-controller-manager-5d647dccbb-6cz8b

Created

Created container: route-controller-manager

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-route-controller-manager

kubelet

route-controller-manager-5d647dccbb-6cz8b

Started

Started container route-controller-manager

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-76c777474b-n9mhf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-console

kubelet

console-76c777474b-n9mhf

Created

Created container: console

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.")

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5ddc94864c-7nwdc became leader

openshift-console

kubelet

console-76c777474b-n9mhf

Started

Started container console

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-5d647dccbb-6cz8b

ProbeError

Readiness probe error: Get "https://10.128.0.94:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-5d647dccbb-6cz8b

Unhealthy

Readiness probe failed: Get "https://10.128.0.94:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x2)

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-69dcf9d7fd to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication

replicaset-controller

oauth-openshift-69dcf9d7fd

SuccessfulDelete

Deleted pod: oauth-openshift-69dcf9d7fd-5tbt2

openshift-authentication

replicaset-controller

oauth-openshift-5b6fc868c6

SuccessfulCreate

Created pod: oauth-openshift-5b6fc868c6-zc2fj

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-5b6fc868c6 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "optional configmap/oauth-metadata has been created"

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-5d647dccbb-6cz8b_330e4522-a522-4f73-9bf6-d919baa5837a became leader

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/config has changed"
(x3)

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentUpdated

Updated Deployment.apps/downloads -n openshift-console because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" to "All is well",Upgradeable changed from False to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: caused by changes in data.client-ca.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Killing

Stopping container alertmanager

openshift-console

kubelet

downloads-84f57b9877-8g27w

Started

Started container download-server

openshift-kube-apiserver

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Killing

Stopping container kube-rbac-proxy

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-apiserver

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-console

kubelet

downloads-84f57b9877-8g27w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b" in 38.742s (38.742s including waiting). Image size: 2895821940 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-console

kubelet

downloads-84f57b9877-8g27w

Created

Created container: download-server

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.34"

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulDelete

delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14"

openshift-kube-controller-manager

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-authentication

multus

oauth-openshift-5b6fc868c6-zc2fj

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.34"}] to [{"raw-internal" "4.18.34"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.34"}]

openshift-kube-apiserver

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-authentication

kubelet

oauth-openshift-5b6fc868c6-zc2fj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing
(x3)

openshift-console

kubelet

downloads-84f57b9877-8g27w

Unhealthy

Readiness probe failed: Get "http://10.128.0.89:8080/": dial tcp 10.128.0.89:8080: connect: connection refused
(x3)

openshift-console

kubelet

downloads-84f57b9877-8g27w

ProbeError

Readiness probe error: Get "http://10.128.0.89:8080/": dial tcp 10.128.0.89:8080: connect: connection refused body:

openshift-authentication

kubelet

oauth-openshift-5b6fc868c6-zc2fj

Created

Created container: oauth-openshift

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-authentication

kubelet

oauth-openshift-5b6fc868c6-zc2fj

Started

Started container oauth-openshift

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/config has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_ce8ff55d-a666-42db-94bd-c7b2c29c193f became leader

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-5884b9cd56-27phk_18f102ba-6c3c-46b4-82fe-90faf8d2675a became leader
(x2)

openshift-authentication

kubelet

oauth-openshift-5b6fc868c6-zc2fj

ProbeError

Readiness probe error: Get "https://10.128.0.97:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found
(x2)

openshift-authentication

kubelet

oauth-openshift-5b6fc868c6-zc2fj

Unhealthy

Readiness probe failed: Get "https://10.128.0.97:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries")

openshift-kube-apiserver

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

DaemonSetCreated

Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "etcd" changed from "" to "4.18.34"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "operator" changed from "" to "4.18.34"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 3 because static pod is ready
(x5)

openshift-console

kubelet

console-5c84b9c874-8xl2l

ProbeError

Startup probe error: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused body:

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: caused by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"

openshift-kube-apiserver

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3")
(x5)

openshift-console

kubelet

console-5c84b9c874-8xl2l

Unhealthy

Startup probe failed: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused
(x4)

openshift-console

kubelet

console-76c777474b-n9mhf

Unhealthy

Startup probe failed: Get "https://10.128.0.95:8443/health": dial tcp 10.128.0.95:8443: connect: connection refused
(x4)

openshift-console

kubelet

console-76c777474b-n9mhf

ProbeError

Startup probe error: Get "https://10.128.0.95:8443/health": dial tcp 10.128.0.95:8443: connect: connection refused body:

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-69b6fc6b88-p8hlq_13627b8c-6dad-4e88-a7bf-3ee98d9b7fc4 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_65a3972f-b90a-428f-a352-38ce1c8f5bf9 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 6 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": dial tcp 172.30.242.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

StartingNewRevision

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller
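
The RegisteredNode event marks the node controller (inside kube-controller-manager) picking up master-0, which unblocks much of the activity recorded around it. Tables like this one can be regenerated straight from the API; below is a minimal sketch with the kubernetes Python client, printing the same columns this dump uses (namespace, component, object, reason, message) plus the repeat count that appears here as "(xN)".

    # Sketch: list events across all namespaces with the columns used in this table.
    # Event.count is the deduplication repeat count shown as "(xN)" in this dump.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for ev in v1.list_event_for_all_namespaces().items:
        component = ev.source.component if ev.source else ""
        obj = ev.involved_object.name if ev.involved_object else ""
        count = f"(x{ev.count})" if ev.count and ev.count > 1 else ""
        print(ev.metadata.namespace, component, obj, ev.reason, ev.message, count,
              sep=" | ")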

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing

openshift-image-registry

daemonset-controller

node-ca

SuccessfulCreate

Created pod: node-ca-ttpzw

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container kube-rbac-proxy

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container kube-rbac-proxy-thanos

openshift-console

kubelet

console-5c84b9c874-8xl2l

Killing

Stopping container console

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container kube-rbac-proxy

openshift-console

replicaset-controller

console-5c84b9c874

SuccessfulDelete

Deleted pod: console-5c84b9c874-8xl2l

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.34_openshift"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.34"} {"oauth-apiserver" "4.18.34"}] to [{"operator" "4.18.34"} {"oauth-apiserver" "4.18.34"} {"oauth-openshift" "4.18.34_openshift"}]

openshift-console

replicaset-controller

console-6787d8db86

SuccessfulCreate

Created pod: console-6787d8db86-xxqwp

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulDelete

delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6787d8db86 to 1 from 0

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5c84b9c874 to 0 from 1
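
The paired ScalingReplicaSet events above (console-6787d8db86 up to 1, console-5c84b9c874 down to 0) are a standard Deployment rolling update: each change to the pod template gets its own ReplicaSet, and the deployment controller shifts replicas from the old one to the new one. The console deployment cycles through several such ReplicaSets in this log as its configuration settles. A sketch for inspecting that ownership chain, assuming only the openshift-console namespace named in the events:

    # Sketch: list ReplicaSets in openshift-console with replica counts and owners,
    # to see which generation of the console Deployment each one belongs to.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    for rs in apps.list_namespaced_replica_set("openshift-console").items:
        owners = [ref.name for ref in (rs.metadata.owner_references or [])]
        print(rs.metadata.name, "replicas:", rs.status.replicas, "owners:", owners)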

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-image-registry

kubelet

node-ca-ttpzw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4fda3b54d00ce93f9646411aaa4d337f897e30a70da77288b7f3fdeb5a8b1a6"

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container prometheus

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulDelete

delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container kube-rbac-proxy-thanos

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 6 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-console

replicaset-controller

console-76c777474b

SuccessfulDelete

Deleted pod: console-76c777474b-n9mhf

openshift-console

kubelet

console-76c777474b-n9mhf

Killing

Stopping container console

openshift-console

replicaset-controller

console-6dc96f5b89

SuccessfulCreate

Created pod: console-6dc96f5b89-ctlsc

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-76c777474b to 0 from 1

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6dc96f5b89 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-6787d8db86-xxqwp

Started

Started container console

openshift-console

kubelet

console-6787d8db86-xxqwp

Created

Created container: console

openshift-console

kubelet

console-6787d8db86-xxqwp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-console

multus

console-6787d8db86-xxqwp

AddedInterface

Add eth0 [10.128.0.100/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-console

replicaset-controller

console-6479f6d896

SuccessfulCreate

Created pod: console-6479f6d896-j6kqz

openshift-console

replicaset-controller

console-6787d8db86

SuccessfulDelete

Deleted pod: console-6787d8db86-xxqwp

openshift-console

multus

console-6dc96f5b89-ctlsc

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-console

kubelet

console-6dc96f5b89-ctlsc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-console

kubelet

console-6dc96f5b89-ctlsc

Created

Created container: console

openshift-console

kubelet

console-6dc96f5b89-ctlsc

Started

Started container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6787d8db86 to 0 from 1

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6479f6d896 to 1 from 0

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.242.18:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "All is well"

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-image-registry

kubelet

node-ca-ttpzw

Started

Started container node-ca

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-image-registry

kubelet

node-ca-ttpzw

Created

Created container: node-ca

openshift-image-registry

kubelet

node-ca-ttpzw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4fda3b54d00ce93f9646411aaa4d337f897e30a70da77288b7f3fdeb5a8b1a6" in 2.877s (2.877s including waiting). Image size: 481636484 bytes.
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.34, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"

openshift-console

kubelet

console-6479f6d896-j6kqz

Started

Started container console

openshift-console

kubelet

console-6479f6d896-j6kqz

Created

Created container: console

openshift-console

kubelet

console-6479f6d896-j6kqz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-console

multus

console-6479f6d896-j6kqz

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.34, 0 replicas available"

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6"

openshift-console

kubelet

console-6787d8db86-xxqwp

Killing

Stopping container console

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing

openshift-etcd

kubelet

installer-2-master-0

Started

Started container installer

openshift-etcd

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-6-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-etcd

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler

kubelet

installer-6-master-0

Started

Started container installer

openshift-kube-scheduler

multus

installer-6-master-0

AddedInterface

Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-6-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

installer-6-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.106/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

openshift-kube-apiserver-operator

RevisionTriggered

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"
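
The revision events in this stretch (StartingNewRevision, ConfigMapCreated/SecretCreated with numeric suffixes, RevisionTriggered, NodeTargetRevisionChanged, PodCreated for installer-N-master-0) are the static-pod rollout model shared by the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler operators: the operator snapshots its inputs into numbered ConfigMaps and Secrets (config-5, kube-apiserver-pod-5, localhost-recovery-client-token-5, ...), then an installer pod places that revision onto the node. Per-node progress is recorded on the operator resource; a sketch for reading it, assuming the cluster-scoped operator.openshift.io/v1 kubeapiserver object named "cluster":

    # Sketch: report per-node static-pod revisions for the kube-apiserver operator.
    # status.nodeStatuses carries currentRevision/targetRevision per node, which is
    # what the "1 node is at revision 0; 0 nodes have achieved new revision 5"
    # condition messages in this log summarize.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    ka = api.get_cluster_custom_object(
        group="operator.openshift.io", version="v1",
        plural="kubeapiservers", name="cluster",
    )
    for node in ka["status"].get("nodeStatuses", []):
        print(node["nodeName"],
              "current:", node.get("currentRevision"),
              "target:", node.get("targetRevision"))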

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 5"

openshift-kube-apiserver

kubelet

installer-4-master-0

Killing

Stopping container installer
(x2)

openshift-console

kubelet

console-6dc96f5b89-ctlsc

ProbeError

Startup probe error: Get "https://10.128.0.101:8443/health": dial tcp 10.128.0.101:8443: connect: connection refused body:
(x2)

openshift-console

kubelet

console-6dc96f5b89-ctlsc

Unhealthy

Startup probe failed: Get "https://10.128.0.101:8443/health": dial tcp 10.128.0.101:8443: connect: connection refused

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.107/23] from ovn-kubernetes

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-69bg7

FailedMount

MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found

openshift-kube-apiserver

kubelet

installer-5-master-0

Created

Created container: installer

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-console namespace

openshift-network-console

replicaset-controller

networking-console-plugin-5cbd49d755

SuccessfulCreate

Created pod: networking-console-plugin-5cbd49d755-69bg7

openshift-network-console

deployment-controller

networking-console-plugin

ScalingReplicaSet

Scaled up replica set networking-console-plugin-5cbd49d755 to 1

openshift-kube-apiserver

kubelet

installer-5-master-0

Started

Started container installer

openshift-console

replicaset-controller

console-c45bf598

SuccessfulCreate

Created pod: console-c45bf598-vngbg

openshift-console

kubelet

console-6dc96f5b89-ctlsc

Killing

Stopping container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6dc96f5b89 to 0 from 1

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.34, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected",status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-c45bf598 to 1 from 0

openshift-console

replicaset-controller

console-6dc96f5b89

SuccessfulDelete

Deleted pod: console-6dc96f5b89-ctlsc

openshift-console

kubelet

console-c45bf598-vngbg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-console

multus

console-c45bf598-vngbg

AddedInterface

Add eth0 [10.128.0.109/23] from ovn-kubernetes

openshift-network-console

multus

networking-console-plugin-5cbd49d755-69bg7

AddedInterface

Add eth0 [10.128.0.108/23] from ovn-kubernetes
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from True to False ("All is well")
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.34, 0 replicas available")

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-69bg7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba"

openshift-console

kubelet

console-c45bf598-vngbg

Created

Created container: console

openshift-console

kubelet

console-c45bf598-vngbg

Started

Started container console

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-69bg7

Created

Created container: networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-69bg7

Started

Started container networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-69bg7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba" in 1.424s (1.424s including waiting). Image size: 446924112 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl
(x7)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : secret "metrics-server-ffspe3f0nbfal" not found
(x7)

openshift-monitoring

kubelet

metrics-server-6474759988-dnw4m

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : secret "metrics-server-ffspe3f0nbfal" not found
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted
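
This ProbeError / Unhealthy / Killing sequence is the kubelet's startup-probe contract: while a container's startup probe keeps failing, the kubelet kills and restarts the container rather than letting it run, which is consistent with the (x3) repeat counts on the surrounding kube-controller-manager Pulled/Created/Started events. For illustration only, a sketch of the probe shape involved, using the Python client's typed models; the path and port come from the probe URL in the events, while the thresholds are hypothetical (the real values live in the static pod manifest):

    # Sketch: a startupProbe of the kind that produces the events above.
    # Path/port taken from "https://192.168.32.10:10257/healthz" in the messages;
    # period_seconds and failure_threshold are illustrative assumptions.
    from kubernetes import client

    startup_probe = client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=10257, scheme="HTTPS"),
        period_seconds=10,     # probe interval
        failure_threshold=3,   # consecutive failures before the kubelet restarts it
    )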

openshift-network-node-identity

kubelet

network-node-identity-m7549

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-m7549

Created

Created container: approver

openshift-network-node-identity

kubelet

network-node-identity-m7549

Started

Started container approver

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup
(x11)

openshift-console

kubelet

console-6479f6d896-j6kqz

Unhealthy

Startup probe failed: Get "https://10.128.0.103:8443/health": dial tcp 10.128.0.103:8443: connect: connection refused
(x11)

openshift-console

kubelet

console-6479f6d896-j6kqz

ProbeError

Startup probe error: Get "https://10.128.0.103:8443/health": dial tcp 10.128.0.103:8443: connect: connection refused body:

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Started

Started container marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914" already present on machine

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-mgb5v

Created

Created container: marketplace-operator
(x11)

openshift-console

kubelet

console-c45bf598-vngbg

Unhealthy

Startup probe failed: Get "https://10.128.0.109:8443/health": dial tcp 10.128.0.109:8443: connect: connection refused
(x11)

openshift-console

kubelet

console-c45bf598-vngbg

ProbeError

Startup probe error: Get "https://10.128.0.109:8443/health": dial tcp 10.128.0.109:8443: connect: connection refused body:

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Started

Started container config-sync-controllers

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Created

Created container: manager

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Started

Started container manager

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Created

Created container: manager

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-w2q2q

Started

Started container manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-nwttq

Created

Created container: cluster-cloud-controller-manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Started

Started container manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-7nhvs

Created

Created container: manager

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Started

Started container control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1" already present on machine

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Created

Created container: control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Started

Started container control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Created

Created container: control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-8krst

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Created

Created container: machine-approver-controller

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-xpl2b

Started

Started container machine-approver-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Started

Started container ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Created

Created container: ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-m77x2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-5ddc94864c-7nwdc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-5ddc94864c-7nwdc

Created

Created container: controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-5ddc94864c-7nwdc

Started

Started container controller-manager

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)
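
The FeatureGatesInitialized message above is a Go struct literal (featuregates.Features) listing the resolved enabled and disabled gate names for this cluster. As a minimal sketch of how a consumer might treat that payload, assuming nothing about the operator's real API, the snippet below models the two lists as sets and answers membership queries; the type and helper names are illustrative only.

```go
package main

import "fmt"

// Illustrative sketch only: model the Enabled/Disabled lists from a
// FeatureGatesInitialized event as sets. These type and method names
// are hypothetical, not the operator's actual API.
type featureGates struct {
	enabled  map[string]bool
	disabled map[string]bool
}

func newFeatureGates(enabled, disabled []string) featureGates {
	fg := featureGates{enabled: map[string]bool{}, disabled: map[string]bool{}}
	for _, n := range enabled {
		fg.enabled[n] = true
	}
	for _, n := range disabled {
		fg.disabled[n] = true
	}
	return fg
}

// Enabled reports whether a gate is explicitly enabled; a gate absent
// from both lists is simply unknown to this resolved snapshot.
func (fg featureGates) Enabled(name string) bool { return fg.enabled[name] }

func main() {
	fg := newFeatureGates(
		[]string{"NewOLM", "ValidatingAdmissionPolicy"}, // subset of the event's Enabled list
		[]string{"GatewayAPI"},                          // subset of the event's Disabled list
	)
	fmt.Println("NewOLM enabled:", fg.Enabled("NewOLM"))
	fmt.Println("GatewayAPI enabled:", fg.Enabled("GatewayAPI"))
}
```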

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "All is well"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'")

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Unhealthy

Liveness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

ProbeError

Liveness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded message changed from "All is well" to "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Unhealthy

Readiness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

ProbeError

Readiness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
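
The four probe events above show the kubelet's HTTP probes timing out before response headers arrive ("Client.Timeout exceeded while awaiting headers"), the signature of an overloaded control plane rather than a refused connection. A minimal Go sketch of the equivalent check follows, using the URL from the event text; the 10-second timeout and the skipped TLS verification are debugging assumptions here, not the kubelet's actual probe configuration.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Sketch of the check the kubelet is performing: an HTTPS GET against
// the scheduler's healthz endpoint with a short client timeout.
func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // assumed timeout; the event does not state the configured value
		Transport: &http.Transport{
			// Debugging shortcut for a self-signed serving cert; not kubelet behavior.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.32.10:10259/healthz")
	if err != nil {
		// On an overloaded node this is where "Client.Timeout exceeded
		// while awaiting headers" surfaces.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```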

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded message changed from "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

BackOff

Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-7577d6f48-vd52m_openshift-cluster-storage-operator(e97435ee-522e-427d-9efc-40bc3d2b0d02)
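
The BackOff event above means the kubelet is delaying restarts of the crashing snapshot-controller container rather than restarting it immediately. The sketch below prints the restart delay schedule, assuming the documented kubelet defaults of a 10-second initial delay, doubling per failure, capped at five minutes; treat those numbers as assumptions for the illustration.

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of the kubelet's crash-loop back-off schedule: the delay
// doubles on each failed restart until it hits the cap.
func main() {
	const initialDelay = 10 * time.Second // assumed kubelet default
	const maxDelay = 5 * time.Minute      // assumed kubelet cap
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart attempt %d delayed by %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```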

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

StartingNewRevision

new revision 5 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

openshift-kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1" already present on machine
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Created

Created container: snapshot-controller
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-vd52m

Started

Started container snapshot-controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

openshift-kube-controller-manager-operator

RevisionTriggered

new revision 5 triggered by "required secret/service-account-private-key has changed"
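
The StartingNewRevision, ConfigMapCreated, SecretCreated, and RevisionTriggered events around this point trace the static-pod revision mechanism: when a required input such as secret/service-account-private-key changes, the revision controller snapshots every input resource into a copy suffixed with the new revision number (config-5, service-ca-5, serving-cert-5, and so on), and an installer pod later writes that revision to disk on the node. A hedged client-go sketch for listing those revision-suffixed ConfigMaps follows; the kubeconfig path is a placeholder and error handling is abbreviated.

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// List the revision-suffixed ConfigMaps the revision controller just
// created in the operand namespace.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cms, err := client.CoreV1().ConfigMaps("openshift-kube-controller-manager").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, cm := range cms.Items {
		if strings.HasSuffix(cm.Name, "-5") { // revision 5, per the events above
			fmt.Println("revision 5 asset:", cm.Name)
		}
	}
}
```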

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

Failed to create installer pod for revision 6 count 0 on node "master-0": the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: "
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.34 because: the server was unable to return a response in the time allotted, but may still be processing the request (get machineconfigpools.machineconfiguration.openshift.io master)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5cdb4c5598-qldx6_openshift-machine-api(84522c03-fd7b-4be7-9413-84e510b9dc5a)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: "

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdMembersDegraded: No unhealthy members found"

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

InstallerPodFailed

Failed to create installer pod for revision 2 count 0 on node "master-0": the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)
(x4)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

BackOff

Back-off restarting failed container cluster-policy-controller in pod kube-controller-manager-master-0_openshift-kube-controller-manager(2ab662059bb326d13a07bf5700e4f545)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

InstallerPodFailed

Failed to create installer pod for revision 4 count 0 on node "master-0": the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: "

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

ComponentUnhealthy

Webhook install failed: the server was unable to return a response in the time allotted, but may still be processing the request (get validatingwebhookconfigurations.admissionregistration.k8s.io)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: "

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_9e94b5cf-18dc-4f43-b6e3-44e476f54660 stopped leading

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-7577d6f48-vd52m

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-7577d6f48-vd52m became leader
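
LeaderElection events like this one ("became leader", and "stopped leading" a few records earlier) are emitted by client-go's leader-election machinery as replicas acquire or release a coordination lock. The sketch below shows the standard client-go pattern with a Lease lock; the lock name, namespace, identity, kubeconfig path, and durations are illustrative placeholders, not values taken from this cluster.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A Lease object acts as the lock; whoever holds and renews it leads.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "example-leader", // placeholder lock name
			Namespace: "default",        // placeholder namespace
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "my-pod-identity"}, // placeholder
	}

	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// These callbacks are where the "became leader" /
			// "stopped leading" transitions are surfaced.
			OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader") },
			OnStoppedLeading: func() { fmt.Println("stopped leading") },
		},
	})
}
```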

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Created

Created container: etcd-operator

openshift-machine-api

cluster-autoscaler-operator-69576476f7-dpg4q_1b685184-6209-42b5-a016-ce543a48b656

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-69576476f7-dpg4q_1b685184-6209-42b5-a016-ce543a48b656 became leader

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-p8hlq

Started

Started container service-ca-operator

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-p8hlq

Created

Created container: service-ca-operator

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-sdsks

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5" already present on machine

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-sdsks

Started

Started container cluster-storage-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Created

Created container: cluster-autoscaler-operator

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-p8hlq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" already present on machine

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Created

Created container: machine-config-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Started

Started container cluster-autoscaler-operator

openshift-machine-api

cluster-autoscaler-operator-69576476f7-dpg4q_1b685184-6209-42b5-a016-ce543a48b656

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-69576476f7-dpg4q_1b685184-6209-42b5-a016-ce543a48b656 became leader

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-5nbfk

Started

Started container machine-config-operator

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Started

Started container etcd-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3" already present on machine

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-sdsks

Created

Created container: cluster-storage-operator

openshift-operator-lifecycle-manager

package-server-manager-854648ff6d-phgxj_de8ad9d5-c426-40b5-ac12-57408acc8645

packageserver-controller-lock

LeaderElection

package-server-manager-854648ff6d-phgxj_de8ad9d5-c426-40b5-ac12-57408acc8645 became leader

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Started

Started container package-server-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Created

Created container: package-server-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-phgxj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Created

Created container: cluster-autoscaler-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dpg4q

Started

Started container cluster-autoscaler-operator

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_f9c01111-be09-489c-9783-872041ba0034 became leader

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-vm7rj

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" already present on machine

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-vm7rj

Created

Created container: cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-vm7rj

Started

Started container cluster-version-operator
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d" already present on machine
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Created

Created container: cluster-baremetal-operator
(x2)

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-qldx6

Started

Started container cluster-baremetal-operator
(x2)

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"
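
The RetrievePayload, LoadPayload, and PayloadLoaded events trace the cluster-version operator retrieving, verifying, and loading the release payload, which is pinned by content digest rather than by tag. The trivial sketch below illustrates the digest-pinning idea using the image reference from these events: it splits the reference on "@" and exposes the digest component. This is an illustration, not CVO code.

```go
package main

import (
	"fmt"
	"strings"
)

// digestOf returns the digest component of a digest-pinned image
// reference such as "repo/name@sha256:...", if present.
func digestOf(ref string) (string, bool) {
	i := strings.LastIndex(ref, "@")
	if i < 0 {
		return "", false
	}
	return ref[i+1:], true
}

func main() {
	// Image reference taken verbatim from the PayloadLoaded event above.
	ref := "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"
	if d, ok := digestOf(ref); ok {
		fmt.Println("payload pinned to digest:", d)
	}
}
```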

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

Unhealthy

Liveness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-27phk

ProbeError

Liveness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body:

openshift-console

kubelet

console-6479f6d896-j6kqz

FailedPreStopHook

PreStopHook failed

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5ddc94864c-7nwdc became leader

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "OAuthClientsControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nOAuthClientsControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from True to False ("All is well")

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-66b55d57d-m77x2 became leader

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nOAuthClientsControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "OAuthClientsControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.34, 0 replicas available")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)" to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" started at 2026-03-08 00:33:56 +0000 UTC is still not ready"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: "

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection
service-ca-operator-69b6fc6b88-p8hlq_92e5e2f9-3dce-4820-9a81-26961a4e8c91 became leader
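
The LeaderElection events in this log ("... became leader") come from the standard client-go leader-election helper these operators use. A minimal sketch of that pattern follows; the lock name and namespace are taken from the event above, while the kubeconfig source, identity, and durations are illustrative assumptions (the durations mirror the defaults OpenShift's library-go applies, but treat them as placeholders):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Real operators use pod-name + UID as the identity, which is why the
	// event shows "service-ca-operator-69b6fc6b88-p8hlq_92e5e2f9-...".
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "service-ca-operator-lock", // lock name from the event above
			Namespace: "openshift-service-ca-operator",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative; library-go-style defaults
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Print("became leader") },
			OnStoppedLeading: func() { log.Print("lost lease") },
		},
	})
}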

openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection
etcd-operator-5884b9cd56-27phk_e5554184-e8f8-43a4-88e4-7e2ddfe36b65 became leader

openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found"

openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found"

openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdMembersDegraded: No unhealthy members found"
(x3: this status transition was recorded three times)

openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus
etcds.operator.openshift.io "cluster" not found
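
The "not found" here means the operator's own etcd custom resource (etcds.operator.openshift.io, name "cluster") did not exist yet when the members controller tried to record status. A sketch of checking for that CR with the client-go dynamic client; the group/version/resource comes from the message above, while the kubeconfig handling is an assumption:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// GVR taken from the event's resource string: etcds.operator.openshift.io
	gvr := schema.GroupVersionResource{Group: "operator.openshift.io", Version: "v1", Resource: "etcds"}

	obj, err := client.Resource(gvr).Get(context.TODO(), "cluster", metav1.GetOptions{})
	if errors.IsNotFound(err) {
		fmt.Println(`etcds.operator.openshift.io "cluster" not found`) // the state reported above
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found:", obj.GetName())
}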

openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection
cluster-storage-operator-6fbfc8dc8f-sdsks_32f18a9c-3cf6-4d4f-a07e-4549de46df00 became leader

openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | openshift-kube-controller-manager-operator | PodCreated
Created Pod/installer-5-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | openshift-kube-controller-manager-operator | OperatorStatusChanged
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" started at 2026-03-08 00:33:56 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(2ab662059bb326d13a07bf5700e4f545)"

openshift-etcd-operator | openshift-cluster-etcd-operator-missingstaticpodcontroller | etcd-operator | MissingStaticPod
static pod lifecycle failure - static pod: "etcd" in namespace: "openshift-etcd" for revision: 2 on node: "master-0" didn't show up, waited: 3m30s
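
Events with a specific reason, such as this MissingStaticPod report, can be filtered out of the stream server-side with a field selector. A minimal client-go sketch, with the namespace and reason taken from the record above and the kubeconfig source assumed:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List only the events whose reason matches the record above.
	events, err := client.CoreV1().Events("openshift-etcd-operator").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "reason=MissingStaticPod"})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s: %s\n", e.LastTimestamp, e.Namespace, e.InvolvedObject.Name, e.Message)
	}
}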

openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | InstallerPodFailed
installer errors: installer: ernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0308 00:33:28.736314 1 cmd.go:413] Getting controller reference for node master-0 I0308 00:33:28.746160 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0308 00:33:28.746245 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0308 00:33:28.746283 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0308 00:33:28.749010 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0308 00:33:38.755548 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0308 00:34:02.751670 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0308 00:34:22.750646 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0308 00:34:42.753781 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0308 00:34:56.754298 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0308 00:34:56.754348 1 cmd.go:109] timed out waiting for the condition
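
The repeated "net/http: request canceled (Client.Timeout exceeded while awaiting headers)" lines are produced by the Go HTTP client's own deadline, not by an apiserver response. A self-contained sketch that triggers the same class of error; the URL mirrors the in-cluster service address from the log but is otherwise illustrative (recent Go versions word the error as "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// An aggressive client-side timeout: any server that takes longer than
	// this to return response headers yields the "(Client.Timeout exceeded
	// while awaiting headers)" error seen in the installer log above.
	client := &http.Client{Timeout: 50 * time.Millisecond}
	_, err := client.Get("https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller")
	fmt.Println(err)
}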

openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found"

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)"

openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)\nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found"

openshift-kube-controller-manager | multus | installer-5-master-0 | AddedInterface
Add eth0 [10.128.0.110/23] from ovn-kubernetes
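
The AddedInterface message encodes the pod address in CIDR form. A short stdlib sketch that unpacks 10.128.0.110/23 from the event above into its network and address count:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The multus event above reports eth0 receiving 10.128.0.110/23.
	ip, ipnet, err := net.ParseCIDR("10.128.0.110/23")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Println("pod IP:", ip)                    // 10.128.0.110
	fmt.Println("subnet:", ipnet)                 // 10.128.0.0/23
	fmt.Println("addresses:", 1<<uint(bits-ones)) // 512
}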

openshift-kube-controller-manager | kubelet | installer-5-master-0 | Pulled
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: "

openshift-kube-controller-manager | kubelet | installer-5-master-0 | Started
Started container installer

openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged
Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-editor-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged
Status for clusteroperator/authentication changed: Progressing message changed from "" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:11.728460 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745412 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745526 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.745546 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.832944 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:33:41.833828 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:33:55.836422 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:11.728460 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745412 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745526 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.745546 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.832944 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:33:41.833828 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:33:55.836422 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:11.728460 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745412 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745526 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.745546 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.832944 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:33:41.833828 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:33:55.836422 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:28.736314 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746160 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746245 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.746283 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.749010 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0308 00:33:38.755548 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0308 00:34:02.751670 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:22.750646 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:42.753781 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:56.754298 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0308 00:34:56.754348 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:28.736314 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746160 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746245 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.746283 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.749010 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0308 00:33:38.755548 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0308 00:34:02.751670 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:22.750646 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:42.753781 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:56.754298 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0308 00:34:56.754348 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: "

openshift-kube-controller-manager | kubelet | installer-5-master-0 | Created
Created container: installer

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged
Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)",Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to ""

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:28.736314 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746160 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746245 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.746283 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.749010 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0308 00:33:38.755548 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0308 00:34:02.751670 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:22.750646 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:42.753781 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:56.754298 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0308 00:34:56.754348 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: "

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)" to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)\nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)"

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed
installer errors: installer: s: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0308 00:33:11.728460 1 cmd.go:413] Getting controller reference for node master-0 I0308 00:33:11.745412 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0308 00:33:11.745526 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0308 00:33:11.745546 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0308 00:33:11.832944 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0308 00:33:41.833828 1 cmd.go:524] Getting installer pods for node master-0 F0308 00:33:55.836422 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
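
The fatal line in this installer log is a pod list by label selector (app=installer) that timed out client-side. A minimal client-go sketch of the same query, with the namespace and selector taken from the message above; the kubeconfig source and the client timeout are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cfg.Timeout = 14 * time.Second // client-side deadline, similar in spirit to the installer's
	client := kubernetes.NewForConfigOrDie(cfg)

	// The same request the installer's cmd.go makes before failing above.
	pods, err := client.CoreV1().Pods("openshift-kube-scheduler").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=installer"})
	if err != nil {
		log.Fatal(err) // a slow apiserver yields the Client.Timeout error seen above
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}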

openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5b6fc868c6-zc2fj\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5b6fc868c6-zc2fj)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:28.736314 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746160 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746245 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.746283 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.749010 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0308 00:33:38.755548 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0308 00:34:02.751670 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:22.750646 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:42.753781 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:56.754298 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0308 00:34:56.754348 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:28.736314 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746160 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746245 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.746283 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.749010 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0308 00:33:38.755548 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0308 00:34:02.751670 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:22.750646 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:42.753781 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:56.754298 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0308 00:34:56.754348 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-editor-role)\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(2ab662059bb326d13a07bf5700e4f545)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(2ab662059bb326d13a07bf5700e4f545)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: " to "All is well"

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
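
The denial above is an RBAC decision: the system:kube-controller-manager user may not get the infrastructures.config.openshift.io resource named "cluster" at cluster scope. That exact check can be replayed with a SelfSubjectAccessReview, the API behind kubectl auth can-i. A sketch assuming a kubeconfig for the identity being tested (the path is a placeholder):

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask the apiserver whether the current identity may perform the exact
	// request the event reports as forbidden.
	ssar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "get",
				Group:    "config.openshift.io",
				Resource: "infrastructures",
				Name:     "cluster",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), ssar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed, "reason:", resp.Status.Reason)
}
```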

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(2ab662059bb326d13a07bf5700e4f545)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: "

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_1307ceb6-5d25-4d7a-868b-7a0dc976018d became leader

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerWorkloadDegraded: ",Available changed from True to False ("")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerWorkloadDegraded: " to "All is well",Available changed from False to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-manager-role)\nCatalogdStaticResourcesDegraded: "
(x5)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy

openshift-console

kubelet

console-c45bf598-vngbg

FailedPreStopHook

PreStopHook failed

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-manager-role)\nCatalogdStaticResourcesDegraded: " to "All is well"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

PodCreated

Created Pod/installer-5-retry-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

installer-6-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-apiserver

kubelet

installer-5-retry-1-master-0

Created

Created container: installer

openshift-kube-apiserver

multus

installer-5-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.111/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-6-retry-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

multus

installer-6-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.112/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-5-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

installer-5-retry-1-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-6-retry-1-master-0

Started

Started container installer
(x2)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-kube-scheduler

kubelet

installer-6-retry-1-master-0

Created

Created container: installer

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-kube-controller-manager

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-scheduler

static-pod-installer

installer-6-retry-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_0381e811-5654-401d-8b5e-de543f6f5834 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-check-endpoints

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

openshift-kube-apiserver-operator

InstallerPodFailed

Failed to create installer pod for revision 5 count 1 on node "master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-retry-1-master-0": dial tcp 172.30.0.1:443: connect: connection refused
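
The GET above fails with connection refused because the kube-apiserver is being restarted into its new revision at this point in the log; static-pod operators treat this as transient and retry. A sketch of that retry pattern using apimachinery's polling helper; the interval, timeout, and kubeconfig path are illustrative assumptions, not the operator's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the GET that failed above succeeds, or give up after two minutes.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Pods("openshift-kube-apiserver").Get(
				ctx, "installer-5-retry-1-master-0", metav1.GetOptions{})
			if err != nil {
				fmt.Println("still failing:", err) // transient while 172.30.0.1:443 is down
				return false, nil                  // retry instead of aborting
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("installer pod reachable again")
}
```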

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-syncer

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigControllerFailed

Failed to resync 4.18.34 because: failed to apply machine config controller manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/machine-config-controller": dial tcp 172.30.0.1:443: connect: connection refused

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused
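
The kubelet's startup probe here is a plain HTTPS GET against the controller manager's /healthz endpoint on port 10257; connection refused just means nothing is listening yet while the static pod restarts. A sketch of an equivalent hand-rolled check; Kubernetes HTTPS probes do not verify the serving certificate, which the sketch mirrors, and the address is taken from the event above:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Probes skip certificate verification; do not do this for real clients.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.32.10:10257/healthz")
	if err != nil {
		// Matches the event while kube-controller-manager is still starting:
		// dial tcp 192.168.32.10:10257: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```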

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

EtcdEndpointsErrorUpdatingStatus

Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-scheduler

cert-recovery-controller

openshift-kube-scheduler

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ScriptControllerErrorUpdatingStatus

Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

openshift-network-node-identity

master-0_0514de66-d87e-4c7a-ad53-43bc512567ce

ovnkube-identity

LeaderElection

master-0_0514de66-d87e-4c7a-ad53-43bc512567ce became leader

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_033adf70-3f3f-4669-8ce8-833f55536d13 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_2cc4b07f-7ecc-4829-93f3-8d447f3f6a7d became leader
(x13)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.34 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_283df1f8-9bea-4c0a-88a1-4e6b51381522 became leader

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6479f6d896 to 0 from 1

openshift-console

replicaset-controller

console-6479f6d896

SuccessfulDelete

Deleted pod: console-6479f6d896-j6kqz

openshift-cloud-controller-manager-operator

master-0_38a36fa1-ace1-49e2-97b1-1026691d0c97

cluster-cloud-config-sync-leader

LeaderElection

master-0_38a36fa1-ace1-49e2-97b1-1026691d0c97 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from False to True ("WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:28.736314 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746160 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:28.746245 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.746283 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:28.749010 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0308 00:33:38.755548 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0308 00:34:02.751670 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:22.750646 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:42.753781 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0308 00:34:56.754298 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0308 00:34:56.754348 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: ")

openshift-machine-api

cluster-baremetal-operator-5cdb4c5598-qldx6_79fa0a8e-4a84-481c-a99a-0dc7af80c63c

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-5cdb4c5598-qldx6_79fa0a8e-4a84-481c-a99a-0dc7af80c63c became leader
(x2)

openshift-cluster-machine-approver

master-0_4cc3776d-8149-4e85-a289-d062b4243d4d

cluster-machine-approver-leader

LeaderElection

master-0_4cc3776d-8149-4e85-a289-d062b4243d4d became leader

openshift-catalogd

catalogd-controller-manager-7f8b8b6f4c-w2q2q_fe833407-819b-4de9-899a-c9da97b1719a

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7f8b8b6f4c-w2q2q_fe833407-819b-4de9-899a-c9da97b1719a became leader
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0308 00:33:11.728460 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745412 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0308 00:33:11.745526 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.745546 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0308 00:33:11.832944 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0308 00:33:41.833828 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0308 00:33:55.836422 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ")

openshift-operator-controller

operator-controller-controller-manager-6598bfb6c4-7nhvs_3c6e05bf-f241-4706-a5a6-f3d02f10551a

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-6598bfb6c4-7nhvs_3c6e05bf-f241-4706-a5a6-f3d02f10551a became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-machine-api

control-plane-machine-set-operator-6686554ddc-8krst_c184cde3-5a08-4bf4-aa35-243e0ce09f63

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-6686554ddc-8krst_c184cde3-5a08-4bf4-aa35-243e0ce09f63 became leader
(x2)

openshift-cloud-controller-manager-operator

master-0_3014af00-2fd9-4675-9670-6ea45fbe8ea7

cluster-cloud-controller-manager-leader

LeaderElection

master-0_3014af00-2fd9-4675-9670-6ea45fbe8ea7 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

openshift-kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 5 because static pod is ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 5 to 6 because static pod is ready

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"3be405ec-b2bb-4184-b6a6-a91dbc1f4698\", ResourceVersion:\"16368\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 8, 0, 14, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 8, 0, 31, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002ff50f8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")
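
The WellKnownReadyControllerDegraded message above is the controller dumping the default/kubernetes Endpoints object and finding Subsets nil, i.e. no apiserver address or port registered yet. A sketch that inspects the same object (the kubeconfig path is a placeholder):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The event shows default/kubernetes with Subsets:[]v1.EndpointSubset(nil);
	// fetch the same object and report what, if anything, is registered.
	ep, err := cs.CoreV1().Endpoints("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if len(ep.Subsets) == 0 {
		fmt.Println("kubernetes Endpoints has no subsets; apiserver not registered yet")
		return
	}
	for _, ss := range ep.Subsets {
		for _, p := range ss.Ports {
			fmt.Printf("port %s=%d addresses=%d\n", p.Name, p.Port, len(ss.Addresses))
		}
	}
}
```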

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well"
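
The two olm entries above bracket a temporary loss of the API server: every GET against 172.30.0.1:443 was refused while the static resources were being verified, and the operator cleared Degraded once connectivity returned. Transitions like these can be confirmed directly against the ClusterOperator objects; a minimal sketch, assuming the kubernetes Python client and a kubeconfig with access to this cluster:

    # List ClusterOperators and print any that currently report Degraded=True.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    api = client.CustomObjectsApi()

    cos = api.list_cluster_custom_object(
        group="config.openshift.io", version="v1", plural="clusteroperators")
    for co in cos["items"]:
        for cond in co.get("status", {}).get("conditions", []):
            if cond["type"] == "Degraded" and cond["status"] == "True":
                print(co["metadata"]["name"], "-", cond.get("message", ""))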

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from True to False ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"3be405ec-b2bb-4184-b6a6-a91dbc1f4698\", ResourceVersion:\"16368\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 8, 0, 14, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 8, 0, 31, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002ff50f8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well"
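
The WellKnownReadyControllerDegraded text above is the authentication operator dumping the default/kubernetes Endpoints object it found with empty Subsets, meaning no kube-apiserver address was registered yet; the condition cleared as soon as kube-apiserver published its endpoint. The same check can be reproduced against the live cluster (a sketch with the kubernetes Python client, not the operator's own code):

    # Inspect the Endpoints object the error message is complaining about.
    from kubernetes import client, config

    config.load_kube_config()
    ep = client.CoreV1Api().read_namespaced_endpoints("kubernetes", "default")
    if not ep.subsets:
        # Empty Subsets reproduces the WellKnownReadyControllerDegraded symptom.
        print("no kube-apiserver endpoint registered yet")
    else:
        for s in ep.subsets:
            print([a.ip for a in (s.addresses or [])],
                  [p.port for p in (s.ports or [])])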

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-prunecontroller

openshift-kube-scheduler-operator

PodCreated

Created Pod/revision-pruner-6-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

openshift-kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"

openshift-kube-scheduler

multus

revision-pruner-6-master-0

AddedInterface

Add eth0 [10.128.0.113/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

revision-pruner-6-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

revision-pruner-6-master-0

Created

Created container: pruner

openshift-kube-scheduler

kubelet

revision-pruner-6-master-0

Started

Started container pruner

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

openshift-kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5")

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_c66d4fdf-0688-4d6c-832d-57951d84e596 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-kube-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists
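
The two CustomResourceDefinitionCreateFailed entries are benign: the kube-apiserver and openshift-apiserver connectivity-check controllers both try to create the same podnetworkconnectivitychecks CRD, and whichever loses the race gets AlreadyExists (HTTP 409), which still means the desired object is present. The usual create-and-tolerate-conflict pattern looks roughly like this (a sketch assuming the kubernetes Python client; apply_crd is an illustrative helper, not an OpenShift API):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    ext = client.ApiextensionsV1Api()

    def apply_crd(crd_body):
        """Create the CRD, treating 409 Conflict (AlreadyExists) as success."""
        try:
            ext.create_custom_resource_definition(crd_body)
        except ApiException as e:
            if e.status != 409:
                raise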

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_beda19de-3ddb-4fb4-a463-9e110c3de21a became leader

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace

sushy-emulator

replicaset-controller

sushy-emulator-78f6d7d749

SuccessfulCreate

Created pod: sushy-emulator-78f6d7d749-mx5qs

sushy-emulator

deployment-controller

sushy-emulator

ScalingReplicaSet

Scaled up replica set sushy-emulator-78f6d7d749 to 1

sushy-emulator

default-scheduler

sushy-emulator-78f6d7d749-mx5qs

Scheduled

Successfully assigned sushy-emulator/sushy-emulator-78f6d7d749-mx5qs to master-0

sushy-emulator

multus

sushy-emulator-78f6d7d749-mx5qs

AddedInterface

Add eth0 [10.128.0.114/23] from ovn-kubernetes

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-mx5qs

Pulling

Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490"

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-mx5qs

Created

Created container: sushy-emulator

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-mx5qs

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" in 7.972s (7.972s including waiting). Image size: 325685589 bytes.

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-mx5qs

Started

Started container sushy-emulator

sushy-emulator

default-scheduler

nova-console-poller-5959594f9c-mqqwc

Scheduled

Successfully assigned sushy-emulator/nova-console-poller-5959594f9c-mqqwc to master-0

sushy-emulator

deployment-controller

nova-console-poller

ScalingReplicaSet

Scaled up replica set nova-console-poller-5959594f9c to 1

sushy-emulator

replicaset-controller

nova-console-poller-5959594f9c

SuccessfulCreate

Created pod: nova-console-poller-5959594f9c-mqqwc

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

multus

nova-console-poller-5959594f9c-mqqwc

AddedInterface

Add eth0 [10.128.0.115/23] from ovn-kubernetes

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Created

Created container: console-poller-36751dd5-7cb2-4df6-8ddb-76ca385931f1

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Started

Started container console-poller-36751dd5-7cb2-4df6-8ddb-76ca385931f1

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 5.311s (5.311s including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 506ms (506ms including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Created

Created container: console-poller-5b32303a-ef2d-41b3-aa30-4a7ae476923e

sushy-emulator

kubelet

nova-console-poller-5959594f9c-mqqwc

Started

Started container console-poller-5b32303a-ef2d-41b3-aa30-4a7ae476923e

sushy-emulator

default-scheduler

nova-console-recorder-7bdc7f66d5-t9l4t

Scheduled

Successfully assigned sushy-emulator/nova-console-recorder-7bdc7f66d5-t9l4t to master-0

sushy-emulator

replicaset-controller

nova-console-recorder-7bdc7f66d5

SuccessfulCreate

Created pod: nova-console-recorder-7bdc7f66d5-t9l4t

sushy-emulator

deployment-controller

nova-console-recorder

ScalingReplicaSet

Scaled up replica set nova-console-recorder-7bdc7f66d5 to 1

sushy-emulator

multus

nova-console-recorder-7bdc7f66d5-t9l4t

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 9.332s (9.332s including waiting). Image size: 664134874 bytes.

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Created

Created container: console-recorder-36751dd5-7cb2-4df6-8ddb-76ca385931f1

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Started

Started container console-recorder-5b32303a-ef2d-41b3-aa30-4a7ae476923e

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Started

Started container console-recorder-36751dd5-7cb2-4df6-8ddb-76ca385931f1

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 420ms (420ms including waiting). Image size: 664134874 bytes.

sushy-emulator

kubelet

nova-console-recorder-7bdc7f66d5-t9l4t

Created

Created container: console-recorder-5b32303a-ef2d-41b3-aa30-4a7ae476923e

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-b6c50fdc874fee89ac3607a4efbb0edd successfully generated (release version: 4.18.34, controller version: d4eb710b17481f468c73d93c876a385253a863e0)

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-e0ac0e6f4f919390c829477b0bc3cb24 successfully generated (release version: 4.18.34, controller version: d4eb710b17481f468c73d93c876a385253a863e0)

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

SetDesiredConfig

Targeted node master-0 to MachineConfig: rendered-master-e0ac0e6f4f919390c829477b0bc3cb24

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-e0ac0e6f4f919390c829477b0bc3cb24

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStopped

Config Drift Monitor stopped

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Working

openshift-machine-config-operator

machineconfigdaemon

master-0

Drain

Drain not required, skipping

openshift-machine-config-operator

machineconfigdaemon

master-0

AddSigtermProtection

Adding SIGTERM protection

openshift-machine-config-operator

machineconfigdaemon

master-0

ServiceReload

Config changes do not require reboot. Service crio was reloaded.

openshift-machine-config-operator

machineconfigdaemon

master-0

ServiceReload

Config changes do not require reboot. Service crio.service was reloaded.

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-e0ac0e6f4f919390c829477b0bc3cb24 to Done

openshift-machine-config-operator

machineconfigdaemon

master-0

RemoveSigtermProtection

Removing SIGTERM protection

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-e0ac0e6f4f919390c829477b0bc3cb24 and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-e0ac0e6f4f919390c829477b0bc3cb24
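
The machine-config-daemon sequence from ConfigDriftMonitorStopped through ConfigDriftMonitorStarted is the node update state machine in miniature: the node controller sets the desiredConfig annotation, the daemon decides no drain or reboot is needed because the change only required a CRI-O reload, marks currentConfig Done, and uncordons the node. Progress is tracked entirely in the node's machineconfiguration.openshift.io annotations; a minimal sketch for reading them, assuming the kubernetes Python client:

    # Print the MachineConfig rollout annotations on master-0.
    from kubernetes import client, config

    config.load_kube_config()
    node = client.CoreV1Api().read_node("master-0")
    ann = node.metadata.annotations or {}
    prefix = "machineconfiguration.openshift.io/"
    for key in ("currentConfig", "desiredConfig", "state"):
        print(key, "=", ann.get(prefix + key))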

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-storage namespace

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

SuccessfulCreate

Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

openshift-marketplace

default-scheduler

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Scheduled

Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths to master-0

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Started

Started container util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Created

Created container: util

openshift-marketplace

multus

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

AddedInterface

Add eth0 [10.128.0.117/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.165s (1.165s including waiting). Image size: 108204 bytes.

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Started

Started container pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Created

Created container: pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Created

Created container: extract

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Started

Started container extract

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zjths

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

Completed

Job completed
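
The 7f6062... pod above is an OLM bundle-unpack job: util and extract run from release-payload images already present on the node, pull fetches the lvms-operator-bundle image, and the job completes once the bundle content has been extracted for the catalog. The state of these unpack jobs can be checked with a short sketch (kubernetes Python client and kubeconfig assumed):

    # Summarize bundle-unpack jobs in openshift-marketplace.
    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()
    for job in batch.list_namespaced_job("openshift-marketplace").items:
        done = (job.status.succeeded or 0) >= (job.spec.completions or 1)
        print(job.metadata.name, "complete" if done else "running")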

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found

openshift-storage

replicaset-controller

lvms-operator-fcd55dd45

SuccessfulCreate

Created pod: lvms-operator-fcd55dd45-6z56x

openshift-storage

default-scheduler

lvms-operator-fcd55dd45-6z56x

Scheduled

Successfully assigned openshift-storage/lvms-operator-fcd55dd45-6z56x to master-0

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-fcd55dd45 to 1
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

AllRequirementsMet

all requirements found, attempting install
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.

openshift-storage

multus

lvms-operator-fcd55dd45-6z56x

AddedInterface

Add eth0 [10.128.0.118/23] from ovn-kubernetes
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy

openshift-storage

kubelet

lvms-operator-fcd55dd45-6z56x

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"

openshift-storage

kubelet

lvms-operator-fcd55dd45-6z56x

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 5.229s (5.229s including waiting). Image size: 238305644 bytes.

openshift-storage

kubelet

lvms-operator-fcd55dd45-6z56x

Started

Started container manager

openshift-storage

kubelet

lvms-operator-fcd55dd45-6z56x

Created

Created container: manager

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

install strategy completed with no errors
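
The lvms-operator.v4.18.4 events trace a normal ClusterServiceVersion lifecycle: RequirementsUnknown, RequirementsNotMet while RBAC and CRDs are created, AllRequirementsMet, InstallWaiting until the deployment gains minimum availability, then InstallSucceeded. The terminal phase is recorded on the CSV itself; a sketch for reading it, assuming the kubernetes Python client:

    # Read the install phase of the lvms-operator CSV.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()
    csv = api.get_namespaced_custom_object(
        group="operators.coreos.com", version="v1alpha1",
        namespace="openshift-storage", plural="clusterserviceversions",
        name="lvms-operator.v4.18.4")
    status = csv.get("status", {})
    print(status.get("phase"), "-", status.get("message", ""))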

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-marketplace

default-scheduler

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Scheduled

Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h to master-0

openshift-marketplace

default-scheduler

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Scheduled

Successfully assigned openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8 to master-0

openshift-marketplace

job-controller

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f421346

SuccessfulCreate

Created pod: d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

openshift-marketplace

multus

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

SuccessfulCreate

Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Started

Started container util

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Created

Created container: util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908"

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Created

Created container: util

openshift-marketplace

multus

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Started

Started container util

openshift-marketplace

default-scheduler

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Scheduled

Successfully assigned openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76 to master-0

openshift-marketplace

job-controller

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a824662b

SuccessfulCreate

Created pod: 0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:2d751ef9609ce7a75d216ef5bee7417f143f8584d795cb8bf9f5df6f7e99c62f"

openshift-marketplace

multus

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:a73534482ccfeb0a712fe08fad5283873b7a53c4aacd0a1d20cce7661b5924e6"

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Started

Started container util

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Created

Created container: util

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Started

Started container pull

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:a73534482ccfeb0a712fe08fad5283873b7a53c4aacd0a1d20cce7661b5924e6" in 1.8s (1.8s including waiting). Image size: 255828 bytes.

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Started

Started container pull

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Created

Created container: pull

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:2d751ef9609ce7a75d216ef5bee7417f143f8584d795cb8bf9f5df6f7e99c62f" in 2.678s (2.678s including waiting). Image size: 408551 bytes.

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 3.7s (3.7s including waiting). Image size: 108352841 bytes.

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Created

Created container: pull

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Started

Started container pull

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Created

Created container: pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

SuccessfulCreate

Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

default-scheduler

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Scheduled

Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz to master-0

openshift-marketplace

multus

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Created

Created container: util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Started

Started container extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Started

Started container util

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6"

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Started

Started container extract

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Started

Started container extract

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82xqr76

Created

Created container: extract

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4z6tq8

Created

Created container: extract

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54k84h

Created

Created container: extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Created

Created container: pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Started

Started container pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.565s (1.565s including waiting). Image size: 4900233 bytes.

openshift-marketplace

job-controller

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f421346

Completed

Job completed

openshift-marketplace

job-controller

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a824662b

Completed

Job completed

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

Completed

Job completed

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Started

Started container extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089lppz

Created

Created container: extract

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

Completed

Job completed

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

RequirementsNotMet

one or more requirements couldn't be found

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

RequirementsUnknown

requirements not yet checked

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-6888856db4 to 1

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager namespace

default

cert-manager-istio-csr-controller

ControllerStarted

controller is starting

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

SuccessfulCreate

Created pod: cert-manager-webhook-6888856db4-lzq2v

cert-manager

default-scheduler

cert-manager-webhook-6888856db4-lzq2v

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-6888856db4-lzq2v to master-0

(x8)

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

FailedCreate

Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-5545bd876 to 1

(x9)

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

FailedCreate

Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found

cert-manager

multus

cert-manager-webhook-6888856db4-lzq2v

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-webhook-6888856db4-lzq2v

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

default-scheduler

cert-manager-cainjector-5545bd876-vc4q5

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-vc4q5 to master-0

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

SuccessfulCreate

Created pod: cert-manager-cainjector-5545bd876-vc4q5

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-545d4d4674 to 1

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vc4q5

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

multus

cert-manager-cainjector-5545bd876-vc4q5

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

RequirementsUnknown

requirements not yet checked

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

AllRequirementsMet

all requirements found, attempting install

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

AllRequirementsMet

all requirements found, attempting install

(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-75c5dccd6c to 1

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

InstallSucceeded

waiting for install components to report healthy

openshift-nmstate

default-scheduler

nmstate-operator-75c5dccd6c-qs2gr

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-75c5dccd6c-qs2gr to master-0

openshift-nmstate

replicaset-controller

nmstate-operator-75c5dccd6c

SuccessfulCreate

Created pod: nmstate-operator-75c5dccd6c-qs2gr

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsUnknown

requirements not yet checked

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-86db79fc85 to 1

metallb-system

replicaset-controller

metallb-operator-controller-manager-86db79fc85

SuccessfulCreate

Created pod: metallb-operator-controller-manager-86db79fc85-g44m9

metallb-system

default-scheduler

metallb-operator-controller-manager-86db79fc85-g44m9

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-86db79fc85-g44m9 to master-0

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-5bc86b5b94 to 1

metallb-system

replicaset-controller

metallb-operator-webhook-server-5bc86b5b94

SuccessfulCreate

Created pod: metallb-operator-webhook-server-5bc86b5b94-cmsdd

metallb-system

default-scheduler

metallb-operator-webhook-server-5bc86b5b94-cmsdd

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-5bc86b5b94-cmsdd to master-0

(x11)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found
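
These FailedCreate bursts in cert-manager ((x8), (x9), (x11)) are a startup ordering race rather than a real failure: the ReplicaSets exist before OLM has finished creating the cert-manager, cert-manager-webhook, and cert-manager-cainjector ServiceAccounts, so pod creation is forbidden until each account appears, at which point the retry succeeds and SuccessfulCreate follows. The retry is effectively a poll for the ServiceAccount; a minimal sketch of that wait, assuming the kubernetes Python client (wait_for_service_account is an illustrative helper):

    import time
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def wait_for_service_account(name, namespace, timeout=120):
        """Poll until the ServiceAccount exists, as the retrying controller
        effectively does; returns False if the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                v1.read_namespaced_service_account(name, namespace)
                return True
            except ApiException as e:
                if e.status != 404:
                    raise
                time.sleep(2)
        return False

    print(wait_for_service_account("cert-manager", "cert-manager"))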

cert-manager

kubelet

cert-manager-webhook-6888856db4-lzq2v

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 8.822s (8.822s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vc4q5

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 6.578s (6.578s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vc4q5

Created

Created container: cert-manager-cainjector

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vc4q5

Started

Started container cert-manager-cainjector

cert-manager

kubelet

cert-manager-webhook-6888856db4-lzq2v

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-webhook-6888856db4-lzq2v

Started

Started container cert-manager-webhook

(x2)

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsNotMet

one or more requirements couldn't be found

metallb-system

multus

metallb-operator-controller-manager-86db79fc85-g44m9

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-nmstate

multus

nmstate-operator-75c5dccd6c-qs2gr

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-qs2gr

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:eb1c8c98cba8bfc388bfdd61fc561ddff36727fba65def7521412c52e4020809"

metallb-system

kubelet

metallb-operator-webhook-server-5bc86b5b94-cmsdd

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9"

metallb-system

multus

metallb-operator-webhook-server-5bc86b5b94-cmsdd

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-controller-manager-86db79fc85-g44m9

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:0f668226ec5fdc1726e9df3bb807b172040b59313117c8cbed8ade8e730a2225"

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallWaiting

Webhook install failed: conversionWebhook not ready

kube-system

cert-manager-cainjector-5545bd876-vc4q5_dcd71c49-7b83-4288-a4f8-a906fb7bea22

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-5545bd876-vc4q5_dcd71c49-7b83-4288-a4f8-a906fb7bea22 became leader

metallb-system

operator-lifecycle-manager

install-hlzfc

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202602140741" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2
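
AppliedWithWarnings records that the metallb-operator bundle still ships the v1beta1 schema of the bgppeers.metallb.io CRD, which the API server accepts but flags as deprecated in favor of v1beta2. Which versions a CRD serves, stores, and deprecates can be read back directly (a sketch, assuming the kubernetes Python client):

    # Show served/storage/deprecated flags for each version of the CRD.
    from kubernetes import client, config

    config.load_kube_config()
    crd = client.ApiextensionsV1Api().read_custom_resource_definition(
        "bgppeers.metallb.io")
    for v in crd.spec.versions:
        flags = [name for name, on in (
            ("served", v.served),
            ("storage", v.storage),
            ("deprecated", getattr(v, "deprecated", False))) if on]
        print(v.name, ",".join(flags))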

(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallSucceeded

waiting for install components to report healthy

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install

cert-manager

default-scheduler

cert-manager-545d4d4674-8h4v6

Scheduled

Successfully assigned cert-manager/cert-manager-545d4d4674-8h4v6 to master-0

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-8h4v6

(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.

openshift-operators

default-scheduler

observability-operator-59bdc8b94-7ldjw

Scheduled

Successfully assigned openshift-operators/observability-operator-59bdc8b94-7ldjw to master-0

openshift-operators

default-scheduler

obo-prometheus-operator-68bc856cb9-lzbg5

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-lzbg5 to master-0

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-lzbg5

openshift-operators

default-scheduler

obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq to master-0

openshift-operators

default-scheduler

obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x to master-0

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-7764df74c5

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-7764df74c5

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-rm2zk

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-7764df74c5 to 2

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-7ldjw

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

openshift-operators

default-scheduler

perses-operator-5bf474d74f-rm2zk

Scheduled

Successfully assigned openshift-operators/perses-operator-5bf474d74f-rm2zk to master-0

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-qs2gr

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:eb1c8c98cba8bfc388bfdd61fc561ddff36727fba65def7521412c52e4020809" in 7.687s (7.687s including waiting). Image size: 451492486 bytes.
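
Pulled messages of this form embed the pull duration and image size; a hedged sketch for extracting both (the regex is an assumption modelled on the message format shown in these events):

    # Sketch: parse pull duration and image size out of a kubelet "Pulled" message.
    # The message format is an assumption based on the events in this log.
    import re

    msg = ('Successfully pulled image "example@sha256:abc" in 7.687s '
           '(7.687s including waiting). Image size: 451492486 bytes.')
    m = re.search(r'in ([\d.]+)s \(([\d.]+)s including waiting\)\. '
                  r'Image size: (\d+) bytes', msg)
    if m:
        pull_s, total_s, size = float(m[1]), float(m[2]), int(m[3])
        print(f"pull={pull_s}s total={total_s}s rate={size / total_s / 1e6:.1f} MB/s")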

metallb-system

kubelet

metallb-operator-webhook-server-5bc86b5b94-cmsdd

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9" in 7.856s (7.856s including waiting). Image size: 555109584 bytes.

metallb-system

kubelet

metallb-operator-controller-manager-86db79fc85-g44m9

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:0f668226ec5fdc1726e9df3bb807b172040b59313117c8cbed8ade8e730a2225" in 7.703s (7.703s including waiting). Image size: 462535787 bytes.

cert-manager

multus

cert-manager-545d4d4674-8h4v6

AddedInterface

Add eth0 [10.128.0.129/23] from ovn-kubernetes

metallb-system

metallb-operator-controller-manager-86db79fc85-g44m9_db0bfdb3-ba10-4764-bf57-f5ff86c12a68

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-86db79fc85-g44m9_db0bfdb3-ba10-4764-bf57-f5ff86c12a68 became leader

metallb-system

kubelet

metallb-operator-controller-manager-86db79fc85-g44m9

Started

Started container manager

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-qs2gr

Started

Started container nmstate-operator

metallb-system

kubelet

metallb-operator-controller-manager-86db79fc85-g44m9

Created

Created container: manager

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-qs2gr

Created

Created container: nmstate-operator

openshift-operators

kubelet

observability-operator-59bdc8b94-7ldjw

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"

cert-manager

kubelet

cert-manager-545d4d4674-8h4v6

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

openshift-operators

multus

perses-operator-5bf474d74f-rm2zk

AddedInterface

Add eth0 [10.128.0.134/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-webhook-server-5bc86b5b94-cmsdd

Started

Started container webhook-server

openshift-operators

multus

observability-operator-59bdc8b94-7ldjw

AddedInterface

Add eth0 [10.128.0.133/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq

AddedInterface

Add eth0 [10.128.0.132/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-webhook-server-5bc86b5b94-cmsdd

Created

Created container: webhook-server

cert-manager

kubelet

cert-manager-545d4d4674-8h4v6

Started

Started container cert-manager-controller

openshift-operators

kubelet

perses-operator-5bf474d74f-rm2zk

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x

AddedInterface

Add eth0 [10.128.0.131/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-lzbg5

AddedInterface

Add eth0 [10.128.0.130/23] from ovn-kubernetes

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

InstallSucceeded

install strategy completed with no errors

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-lzbg5

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

cert-manager

kubelet

cert-manager-545d4d4674-8h4v6

Created

Created container: cert-manager-controller

openshift-operators

kubelet

perses-operator-5bf474d74f-rm2zk

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 12.478s (12.478s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

perses-operator-5bf474d74f-rm2zk

Created

Created container: perses-operator

openshift-operators

kubelet

observability-operator-59bdc8b94-7ldjw

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 13.067s (13.067s including waiting). Image size: 399540002 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-lzbg5

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 12.421s (12.421s including waiting). Image size: 199215153 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-lzbg5

Created

Created container: prometheus-operator

openshift-operators

kubelet

observability-operator-59bdc8b94-7ldjw

Created

Created container: operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 12.838s (12.838s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 12.385s (12.385s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-7ldjw

Started

Started container operator

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-lzbg5

Started

Started container prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-vtt2x

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7764df74c5-mfxhq

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

perses-operator-5bf474d74f-rm2zk

Started

Started container perses-operator

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-545d4d4674-8h4v6-external-cert-manager-controller became leader

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

install strategy completed with no errors

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallSucceeded

install strategy completed with no errors

metallb-system

default-scheduler

frr-k8s-vb6dz

Scheduled

Successfully assigned metallb-system/frr-k8s-vb6dz to master-0

metallb-system

default-scheduler

frr-k8s-webhook-server-7f989f654f-njhxq

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-7f989f654f-njhxq to master-0

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 3a20b3fd-d78c-45f6-9967-c5407197e0cb] does not exist in namespace ""
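
OwnerRefInvalidNamespace fires because a cluster-scoped object (the ValidatingWebhookConfiguration) names a namespaced MetalLB resource as its owner, which the garbage collector cannot resolve. A sketch for listing the offending ownerReferences, assuming kubeconfig access:

    # Sketch: list ownerReferences on the cluster-scoped webhook configuration.
    # A cluster-scoped object must not be owned by a namespaced resource, which
    # is exactly what OwnerRefInvalidNamespace reports. Assumes kubeconfig access.
    from kubernetes import client, config

    config.load_kube_config()
    adm = client.AdmissionregistrationV1Api()
    vwc = adm.read_validating_webhook_configuration(
        "frr-k8s-validating-webhook-configuration")
    for ref in vwc.metadata.owner_references or []:
        print(ref.api_version, ref.kind, ref.name, ref.uid)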

metallb-system

default-scheduler

controller-86ddb6bd46-mpsmp

Scheduled

Successfully assigned metallb-system/controller-86ddb6bd46-mpsmp to master-0

metallb-system

replicaset-controller

frr-k8s-webhook-server-7f989f654f

SuccessfulCreate

Created pod: frr-k8s-webhook-server-7f989f654f-njhxq

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-7f989f654f to 1

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-vb6dz

metallb-system

replicaset-controller

controller-86ddb6bd46

SuccessfulCreate

Created pod: controller-86ddb6bd46-mpsmp

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-86ddb6bd46 to 1

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-zhcsd

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-njhxq

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "frr-k8s-webhook-server-cert" not found
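
FailedMount on a webhook cert volume is usually transient: the operator creates the secret shortly after the pod is scheduled. A sketch for confirming the secret has appeared, assuming kubeconfig access:

    # Sketch: check whether the secret behind the failing "cert" volume exists yet.
    # This FailedMount is typically transient while the operator is still issuing
    # the certificate. Assumes kubeconfig access and the `kubernetes` client.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        v1.read_namespaced_secret("frr-k8s-webhook-server-cert", "metallb-system")
        print("secret present; the mount should succeed on the kubelet's retry")
    except ApiException as e:
        if e.status == 404:
            print("secret still missing")
        else:
            raise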

metallb-system

default-scheduler

speaker-zhcsd

Scheduled

Successfully assigned metallb-system/speaker-zhcsd to master-0
(x2)

metallb-system

kubelet

speaker-zhcsd

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

kubelet

frr-k8s-vb6dz

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9"
(x2)

openshift-nmstate

replicaset-controller

nmstate-console-plugin-5dcbbd79cf

SuccessfulCreate

Created pod: nmstate-console-plugin-5dcbbd79cf-5bcf4

metallb-system

kubelet

controller-86ddb6bd46-mpsmp

Created

Created container: controller

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-cb5f6487 to 1

openshift-nmstate

deployment-controller

nmstate-metrics

ScalingReplicaSet

Scaled up replica set nmstate-metrics-69594cc75 to 1

openshift-nmstate

replicaset-controller

nmstate-webhook-786f45cff4

SuccessfulCreate

Created pod: nmstate-webhook-786f45cff4-v5hhx

openshift-console

replicaset-controller

console-cb5f6487

SuccessfulCreate

Created pod: console-cb5f6487-gmcnf

metallb-system

kubelet

speaker-zhcsd

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9" already present on machine

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-njhxq

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9"

metallb-system

multus

frr-k8s-webhook-server-7f989f654f-njhxq

AddedInterface

Add eth0 [10.128.0.135/23] from ovn-kubernetes

openshift-nmstate

default-scheduler

nmstate-webhook-786f45cff4-v5hhx

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-786f45cff4-v5hhx to master-0
(x7)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]
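
The long message above is a before/after dump of status.relatedObjects; the only actual change is the added nmstate-console-plugin entry, as a set difference makes obvious:

    # Sketch: the relatedObjects change reduced to a set difference. Tuples are
    # (group, resource, namespace, name) as printed in the message; only the
    # entries relevant to the diff are shown here.
    before = {("console.openshift.io", "consoleplugins", "", "monitoring-plugin"),
              ("console.openshift.io", "consoleplugins", "", "networking-console-plugin")}
    after = before | {("console.openshift.io", "consoleplugins", "", "nmstate-console-plugin")}
    print(after - before)
    # -> {('console.openshift.io', 'consoleplugins', '', 'nmstate-console-plugin')}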

metallb-system

kubelet

controller-86ddb6bd46-mpsmp

Started

Started container controller

metallb-system

kubelet

controller-86ddb6bd46-mpsmp

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902"

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-786f45cff4 to 1

openshift-nmstate

default-scheduler

nmstate-console-plugin-5dcbbd79cf-5bcf4

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5bcf4 to master-0

openshift-nmstate

deployment-controller

nmstate-console-plugin

ScalingReplicaSet

Scaled up replica set nmstate-console-plugin-5dcbbd79cf to 1

openshift-nmstate

replicaset-controller

nmstate-metrics-69594cc75

SuccessfulCreate

Created pod: nmstate-metrics-69594cc75-xln25
(x6)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml

metallb-system

kubelet

controller-86ddb6bd46-mpsmp

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9" already present on machine

openshift-nmstate

default-scheduler

nmstate-metrics-69594cc75-xln25

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-69594cc75-xln25 to master-0

openshift-nmstate

default-scheduler

nmstate-handler-d7nd4

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-d7nd4 to master-0

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-d7nd4

metallb-system

multus

controller-86ddb6bd46-mpsmp

AddedInterface

Add eth0 [10.128.0.136/23] from ovn-kubernetes

default

endpoint-controller

nmstate-console-plugin

FailedToCreateEndpoint

Failed to create endpoint for service openshift-nmstate/nmstate-console-plugin: endpoints "nmstate-console-plugin" already exists

openshift-console

default-scheduler

console-cb5f6487-gmcnf

Scheduled

Successfully assigned openshift-console/console-cb5f6487-gmcnf to master-0

openshift-nmstate

multus

nmstate-console-plugin-5dcbbd79cf-5bcf4

AddedInterface

Add eth0 [10.128.0.139/23] from ovn-kubernetes

openshift-nmstate

multus

nmstate-metrics-69594cc75-xln25

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")
(x3)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdateFailed

Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again
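
The "object has been modified" failure is an optimistic-concurrency conflict (HTTP 409): the writer's cached copy of the Deployment went stale between read and write. The standard remedy is to re-read and retry; a hedged sketch (the annotation change is a placeholder):

    # Sketch: retry-on-conflict for the 409 "object has been modified" error.
    # Re-reading refreshes metadata.resourceVersion so the write can succeed.
    # Assumes kubeconfig access; the annotation below is purely hypothetical.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    apps = client.AppsV1Api()
    for _ in range(5):
        dep = apps.read_namespaced_deployment("console", "openshift-console")
        dep.metadata.annotations = dep.metadata.annotations or {}
        dep.metadata.annotations["example.invalid/touched"] = "true"  # hypothetical
        try:
            apps.replace_namespaced_deployment("console", "openshift-console", dep)
            break
        except ApiException as e:
            if e.status != 409:  # anything other than a conflict is a real error
                raise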

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-xln25

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5"

openshift-nmstate

kubelet

nmstate-handler-d7nd4

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5"

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-5bcf4

Pulling

Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:0b7639d1c6c6a759c2d100c224c774d3ccd4065f4b299a6ea69a8bfebc7febf5"

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-v5hhx

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5"

openshift-nmstate

multus

nmstate-webhook-786f45cff4-v5hhx

AddedInterface

Add eth0 [10.128.0.138/23] from ovn-kubernetes

metallb-system

kubelet

speaker-zhcsd

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902"

metallb-system

kubelet

speaker-zhcsd

Started

Started container speaker

metallb-system

kubelet

speaker-zhcsd

Created

Created container: speaker

openshift-console

multus

console-cb5f6487-gmcnf

AddedInterface

Add eth0 [10.128.0.140/23] from ovn-kubernetes

metallb-system

kubelet

controller-86ddb6bd46-mpsmp

Started

Started container kube-rbac-proxy

metallb-system

kubelet

controller-86ddb6bd46-mpsmp

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

controller-86ddb6bd46-mpsmp

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" in 1.339s (1.339s including waiting). Image size: 465086330 bytes.

metallb-system

kubelet

speaker-zhcsd

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" in 1.044s (1.044s including waiting). Image size: 465086330 bytes.

metallb-system

kubelet

speaker-zhcsd

Started

Started container kube-rbac-proxy

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.34, 1 replicas available")

openshift-console

kubelet

console-cb5f6487-gmcnf

Started

Started container console

metallb-system

kubelet

speaker-zhcsd

Created

Created container: kube-rbac-proxy

openshift-console

kubelet

console-cb5f6487-gmcnf

Created

Created container: console

openshift-console

kubelet

console-cb5f6487-gmcnf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-nmstate

kubelet

nmstate-handler-d7nd4

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5" in 7.553s (7.553s including waiting). Image size: 498677652 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-5bcf4

Created

Created container: nmstate-console-plugin

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" in 9.518s (9.518s including waiting). Image size: 662213339 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-5bcf4

Pulled

Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:0b7639d1c6c6a759c2d100c224c774d3ccd4065f4b299a6ea69a8bfebc7febf5" in 6.838s (6.838s including waiting). Image size: 453887352 bytes.

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-v5hhx

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5" in 6.918s (6.918s including waiting). Image size: 498677652 bytes.

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-njhxq

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" in 8.319s (8.319s including waiting). Image size: 662213339 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-5bcf4

Started

Started container nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-xln25

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5" in 7.149s (7.149s including waiting). Image size: 498677652 bytes.

openshift-nmstate

kubelet

nmstate-handler-d7nd4

Started

Started container nmstate-handler

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

openshift-nmstate

kubelet

nmstate-handler-d7nd4

Created

Created container: nmstate-handler

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-xln25

Created

Created container: nmstate-metrics

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-v5hhx

Started

Started container nmstate-webhook

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-v5hhx

Created

Created container: nmstate-webhook

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-xln25

Started

Started container nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-xln25

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" already present on machine

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-njhxq

Started

Started container frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-njhxq

Created

Created container: frr-k8s-webhook-server

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-xln25

Created

Created container: kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-xln25

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container cp-reloader

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container cp-frr-files

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: cp-frr-files

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: frr

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-c45bf598 to 0 from 1

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container frr

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: reloader

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: controller

openshift-console

kubelet

console-c45bf598-vngbg

Killing

Stopping container console

openshift-console

replicaset-controller

console-c45bf598

SuccessfulDelete

Deleted pod: console-c45bf598-vngbg

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container controller

metallb-system

kubelet

metallb-operator-controller-manager-86db79fc85-g44m9

Unhealthy

Readiness probe failed: Get "http://10.128.0.127:8080/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
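
The kubelet's readiness probe timed out against the pod IP. A sketch that reproduces the same HTTP check with a short timeout, run from anywhere with pod-network reachability (the requests dependency is an assumption):

    # Sketch: reproduce the failing readiness check by hand. The IP, port, and
    # path come from the event message; `requests` is an assumed dependency.
    import requests

    try:
        r = requests.get("http://10.128.0.127:8080/readyz", timeout=1)
        print("readyz returned", r.status_code)
    except requests.exceptions.RequestException as e:
        print("probe still failing:", e)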

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.34, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.34, 2 replicas available"

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" already present on machine

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-vb6dz

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-vb6dz

Started

Started container reloader

metallb-system

kubelet

frr-k8s-vb6dz

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-l6zlx

openshift-storage

default-scheduler

vg-manager-l6zlx

Scheduled

Successfully assigned openshift-storage/vg-manager-l6zlx to master-0

openshift-storage

multus

vg-manager-l6zlx

AddedInterface

Add eth0 [10.128.0.141/23] from ovn-kubernetes
(x12)

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io
(x2)
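
The reconciler is waiting for the topolvm.io CSI driver to register itself on the node's CSINode object. A sketch for checking the registration, assuming kubeconfig access:

    # Sketch: check whether topolvm.io has registered on the CSINode for
    # master-0, which is what ResourceReconciliationIncomplete is waiting on.
    from kubernetes import client, config

    config.load_kube_config()
    csinode = client.StorageV1Api().read_csi_node("master-0")
    drivers = [d.name for d in (csinode.spec.drivers or [])]
    print("topolvm.io registered:", "topolvm.io" in drivers)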

openshift-storage

kubelet

vg-manager-l6zlx

Started

Started container vg-manager
(x2)

openshift-storage

kubelet

vg-manager-l6zlx

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine
(x2)

openshift-storage

kubelet

vg-manager-l6zlx

Created

Created container: vg-manager
(x2)

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack-operators namespace

openstack-operators

kubelet

openstack-operator-index-xvcfv

Pulling

Pulling image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator-index:da1efd1b58ce237ec2ea1856e07a2da750caf6eb"

openstack-operators

multus

openstack-operator-index-xvcfv

AddedInterface

Add eth0 [10.128.0.142/23] from ovn-kubernetes

openstack-operators

default-scheduler

openstack-operator-index-xvcfv

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-xvcfv to master-0
(x6)

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index

openstack-operators

kubelet

openstack-operator-index-xvcfv

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-xvcfv

Started

Started container registry-server
(x4)

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.244.7:50051: connect: connection refused"
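
OLM resolves catalog content over gRPC against the catalog pod's registry-server on port 50051; the connection-refused error just means the server was not listening yet. A raw reachability sketch (address and port taken from the message):

    # Sketch: raw TCP check of the catalogsource gRPC endpoint OLM failed to
    # dial. This only tests reachability, not the gRPC registry API itself.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect(("172.30.244.7", 50051))
        print("registry-server reachable")
    except OSError as e:
        print("still refused:", e)
    finally:
        s.close()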

openstack-operators

kubelet

openstack-operator-index-xvcfv

Pulled

Successfully pulled image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator-index:da1efd1b58ce237ec2ea1856e07a2da750caf6eb" in 7.629s (7.629s including waiting). Image size: 94041432 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openstack-operators

job-controller

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e548788

SuccessfulCreate

Created pod: 63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

openstack-operators

default-scheduler

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Scheduled

Successfully assigned openstack-operators/63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84 to master-0

openstack-operators

multus

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

AddedInterface

Add eth0 [10.128.0.143/23] from ovn-kubernetes

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Created

Created container: util

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Started

Started container util

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Pulling

Pulling image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator-bundle:da1efd1b58ce237ec2ea1856e07a2da750caf6eb"

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Pulled

Successfully pulled image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator-bundle:da1efd1b58ce237ec2ea1856e07a2da750caf6eb" in 228ms (228ms including waiting). Image size: 81926 bytes.

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Started

Started container pull

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Created

Created container: pull

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Created

Created container: extract

openstack-operators

kubelet

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e5sbm84

Started

Started container extract

openstack-operators

job-controller

63c289b49d1df002e9410bfc78c42c1a81fdac5ac0156ab656e2a123e548788

Completed

Job completed
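
Note: this Job is OLM's bundle-unpack mechanism: its pod runs the util, pull, and extract containers seen in the preceding kubelet events to copy the openstack-operator-bundle manifests out of the image, and once the job-controller reports Completed, OLM can create the ClusterServiceVersion that appears next. A minimal sketch, assuming the `kubernetes` Python client, that waits for that condition (the Job name below is a hypothetical placeholder, since the real one is truncated in the table):

    # Minimal sketch, assuming the `kubernetes` Python client; the Job name is
    # hypothetical here because the real one is truncated in the event table.
    import time
    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()

    def wait_for_job(name, namespace="openstack-operators", interval=5):
        """Poll until the Job reports a Complete=True condition."""
        while True:
            status = batch.read_namespaced_job_status(name, namespace).status
            if any(c.type == "Complete" and c.status == "True"
                   for c in (status.conditions or [])):
                return
            time.sleep(interval)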

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

RequirementsUnknown

requirements not yet checked

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

AllRequirementsMet

all requirements found, attempting install

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

RequirementsNotMet

one or more requirements couldn't be found

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-init to become ready: waiting for spec update of deployment "openstack-operator-controller-init" to be observed...
(x2)

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

waiting for install components to report healthy

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability.

openstack-operators

deployment-controller

openstack-operator-controller-init

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-init-5748c74587 to 1

openstack-operators

default-scheduler

openstack-operator-controller-init-5748c74587-hx2qk

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-5748c74587-hx2qk to master-0
(x2)

openstack-operators

replicaset-controller

openstack-operator-controller-init-5748c74587

SuccessfulCreate

Created pod: openstack-operator-controller-init-5748c74587-hx2qk

openstack-operators

kubelet

openstack-operator-controller-init-5748c74587-hx2qk

Pulling

Pulling image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator:da1efd1b58ce237ec2ea1856e07a2da750caf6eb"

openstack-operators

multus

openstack-operator-controller-init-5748c74587-hx2qk

AddedInterface

Add eth0 [10.128.0.144/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-init-5748c74587-hx2qk

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-init-5748c74587-hx2qk

Started

Started container operator

openstack-operators

openstack-operator-controller-init-5748c74587-hx2qk_fd0660d8-c9e1-400d-9d74-1ddc10565e0d

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-init-5748c74587-hx2qk_fd0660d8-c9e1-400d-9d74-1ddc10565e0d became leader
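
Note: this LeaderElection event records the controller-runtime manager inside openstack-operator-controller-init acquiring its election lock, identified by the ID 20ca801f.openstack.org. A minimal sketch, assuming the `kubernetes` Python client and that the lock is a coordination.k8s.io Lease named after the election ID in the operator's own namespace (the controller-runtime default), to see the current holder:

    # Minimal sketch, assuming the `kubernetes` Python client and that the
    # election lock is a coordination.k8s.io Lease named after the election ID
    # in the operator's namespace (the controller-runtime default).
    from kubernetes import client, config

    config.load_kube_config()
    coord = client.CoordinationV1Api()

    lease = coord.read_namespaced_lease("20ca801f.openstack.org",
                                        "openstack-operators")
    print("current leader:", lease.spec.holder_identity)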

openstack-operators

kubelet

openstack-operator-controller-init-5748c74587-hx2qk

Pulled

Successfully pulled image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator:da1efd1b58ce237ec2ea1856e07a2da750caf6eb" in 5.015s (5.015s including waiting). Image size: 293352349 bytes.

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

install strategy completed with no errors
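
Note: the openstack-operator.v0.6.0 events above trace OLM's CSV phase machine: RequirementsUnknown while prerequisites are first checked, a brief flap between RequirementsNotMet and AllRequirementsMet as CRDs and RBAC land, InstallWaiting while the openstack-operator-controller-init Deployment gains minimum availability, and finally InstallSucceeded. A minimal sketch, assuming the `kubernetes` Python client, to read the phase the CSV has settled on:

    # Minimal sketch, assuming the `kubernetes` Python client: read the phase
    # the CSV has settled on after the event sequence above.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    csv = api.get_namespaced_custom_object(
        "operators.coreos.com", "v1alpha1",
        "openstack-operators", "clusterserviceversions",
        "openstack-operator.v0.6.0",
    )
    print(csv["status"]["phase"], "-", csv["status"].get("message", ""))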

openstack-operators

cert-manager-certificates-trigger

barbican-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

cinder-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-ksw4n"

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

barbican-operator-metrics-certs

Issuing

The certificate has been successfully issued
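
Note: every *-operator-metrics-certs Certificate in the surrounding events follows the same cert-manager pipeline: the trigger controller sees the target Secret missing (Issuing), the key-manager stores a temporary private-key Secret (Generated), the request-manager creates a CertificateRequest (Requested), each issuer-type controller declines to sign until the built-in approver adds an Approved condition (WaitingForApproval), and then the selfsigned issuer signs (CertificateIssued) so the issuing controller can report success. The BadConfig warning is expected for a self-signed issuer with no subject configured, hence the empty Issuer DN. A minimal sketch, assuming the `kubernetes` Python client, to summarize approval state across these CertificateRequests:

    # Minimal sketch, assuming the `kubernetes` Python client: summarize the
    # Approved/Ready conditions of the CertificateRequests driven by the flow
    # described above.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    reqs = api.list_namespaced_custom_object(
        "cert-manager.io", "v1", "openstack-operators", "certificaterequests",
    )
    for cr in reqs["items"]:
        conds = {c["type"]: c["status"]
                 for c in cr.get("status", {}).get("conditions", [])}
        print(cr["metadata"]["name"],
              "Approved:", conds.get("Approved"),
              "Ready:", conds.get("Ready"))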

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-xtz6p"

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-vr9nq"

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-mdn4d"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

barbican-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

glance-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

infra-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-mc7q2"

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-kkpgt"

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

infra-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "infra-operator-metrics-certs-fjc4f"

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-xmwlc"

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

designate-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-wzs6k"

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-cf99c678f to 1
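
Note: from here the bundled operators fan out, and each Deployment produces the same three-event chain: the deployment-controller scales up a ReplicaSet (ScalingReplicaSet), the replicaset-controller creates the pod (SuccessfulCreate), and the default-scheduler binds it to master-0 (Scheduled). A minimal sketch, assuming the `kubernetes` Python client, to confirm every controller-manager in the namespace reached its desired replica count:

    # Minimal sketch, assuming the `kubernetes` Python client: confirm each
    # operator Deployment in the namespace reached its desired replica count.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    for d in apps.list_namespaced_deployment("openstack-operators").items:
        print(f"{d.metadata.name}: {d.status.ready_replicas or 0}/{d.spec.replicas}")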

openstack-operators

default-scheduler

heat-operator-controller-manager-cf99c678f-8wjbv

Scheduled

Successfully assigned openstack-operators/heat-operator-controller-manager-cf99c678f-8wjbv to master-0

openstack-operators

replicaset-controller

infra-operator-controller-manager-b8c8d7cc8

SuccessfulCreate

Created pod: infra-operator-controller-manager-b8c8d7cc8-bhcdj

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-b8c8d7cc8 to 1

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-64db6967f8 to 1

openstack-operators

replicaset-controller

glance-operator-controller-manager-64db6967f8

SuccessfulCreate

Created pod: glance-operator-controller-manager-64db6967f8-9tzwz

openstack-operators

default-scheduler

glance-operator-controller-manager-64db6967f8-9tzwz

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-64db6967f8-9tzwz to master-0

openstack-operators

replicaset-controller

heat-operator-controller-manager-cf99c678f

SuccessfulCreate

Created pod: heat-operator-controller-manager-cf99c678f-8wjbv

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-78bc7f9bd9 to 1

openstack-operators

cert-manager-certificaterequests-issuer-venafi

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

horizon-operator-controller-manager-78bc7f9bd9

SuccessfulCreate

Created pod: horizon-operator-controller-manager-78bc7f9bd9-wq5pz

openstack-operators

default-scheduler

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wq5pz to master-0

openstack-operators

cert-manager-certificaterequests-issuer-vault

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

default-scheduler

barbican-operator-controller-manager-6db6876945-j9sh4

Scheduled

Successfully assigned openstack-operators/barbican-operator-controller-manager-6db6876945-j9sh4 to master-0

openstack-operators

cert-manager-certificates-request-manager

heat-operator-metrics-certs

Requested

Created new CertificateRequest resource "heat-operator-metrics-certs-1"

openstack-operators

replicaset-controller

barbican-operator-controller-manager-6db6876945

SuccessfulCreate

Created pod: barbican-operator-controller-manager-6db6876945-j9sh4

openstack-operators

default-scheduler

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-b8c8d7cc8-bhcdj to master-0

openstack-operators

default-scheduler

ironic-operator-controller-manager-686765764-jhdvn

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-686765764-jhdvn to master-0

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-5d87c9d997 to 1

openstack-operators

replicaset-controller

ironic-operator-controller-manager-686765764

SuccessfulCreate

Created pod: ironic-operator-controller-manager-686765764-jhdvn

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-686765764 to 1

openstack-operators

default-scheduler

keystone-operator-controller-manager-7c789f89c6-jrqk2

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-7c789f89c6-jrqk2 to master-0

openstack-operators

replicaset-controller

keystone-operator-controller-manager-7c789f89c6

SuccessfulCreate

Created pod: keystone-operator-controller-manager-7c789f89c6-jrqk2

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-7c789f89c6 to 1

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-6db6876945 to 1

openstack-operators

default-scheduler

manila-operator-controller-manager-67d996989d-xwlf2

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-67d996989d-xwlf2 to master-0

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-bccc79885 to 1

openstack-operators

replicaset-controller

watcher-operator-controller-manager-bccc79885

SuccessfulCreate

Created pod: watcher-operator-controller-manager-bccc79885-zvtv4

openstack-operators

default-scheduler

watcher-operator-controller-manager-bccc79885-zvtv4

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-zvtv4 to master-0

openstack-operators

cert-manager-certificates-trigger

test-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

manila-operator-controller-manager-67d996989d

SuccessfulCreate

Created pod: manila-operator-controller-manager-67d996989d-xwlf2

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-67d996989d to 1

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-55b5ff4dbb to 1

openstack-operators

replicaset-controller

test-operator-controller-manager-55b5ff4dbb

SuccessfulCreate

Created pod: test-operator-controller-manager-55b5ff4dbb-56p6p

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-8vcc9"

openstack-operators

default-scheduler

mariadb-operator-controller-manager-7b6bfb6475-l4w8t

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-l4w8t to master-0

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-7b6bfb6475

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-7b6bfb6475-l4w8t

openstack-operators

deployment-controller

mariadb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set mariadb-operator-controller-manager-7b6bfb6475 to 1

openstack-operators

default-scheduler

neutron-operator-controller-manager-54688575f-b6ldp

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-54688575f-b6ldp to master-0

openstack-operators

replicaset-controller

neutron-operator-controller-manager-54688575f

SuccessfulCreate

Created pod: neutron-operator-controller-manager-54688575f-b6ldp

openstack-operators

default-scheduler

test-operator-controller-manager-55b5ff4dbb-56p6p

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-55b5ff4dbb-56p6p to master-0

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-54688575f to 1

openstack-operators

default-scheduler

nova-operator-controller-manager-74b6b5dc96-vbkgr

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-74b6b5dc96-vbkgr to master-0

openstack-operators

replicaset-controller

nova-operator-controller-manager-74b6b5dc96

SuccessfulCreate

Created pod: nova-operator-controller-manager-74b6b5dc96-vbkgr

openstack-operators

default-scheduler

ovn-operator-controller-manager-75684d597f-nkfd6

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-75684d597f-nkfd6 to master-0

openstack-operators

cert-manager-certificates-trigger

telemetry-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

designate-operator-controller-manager-5d87c9d997

SuccessfulCreate

Created pod: designate-operator-controller-manager-5d87c9d997-5hmg6

openstack-operators

default-scheduler

designate-operator-controller-manager-5d87c9d997-5hmg6

Scheduled

Successfully assigned openstack-operators/designate-operator-controller-manager-5d87c9d997-5hmg6 to master-0

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-74b6b5dc96 to 1

openstack-operators

default-scheduler

octavia-operator-controller-manager-5d86c7ddb7-xgs7b

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-xgs7b to master-0

openstack-operators

replicaset-controller

octavia-operator-controller-manager-5d86c7ddb7

SuccessfulCreate

Created pod: octavia-operator-controller-manager-5d86c7ddb7-xgs7b

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-5d86c7ddb7 to 1

openstack-operators

default-scheduler

cinder-operator-controller-manager-55d77d7b5c-rrvfw

Scheduled

Successfully assigned openstack-operators/cinder-operator-controller-manager-55d77d7b5c-rrvfw to master-0

openstack-operators

replicaset-controller

ovn-operator-controller-manager-75684d597f

SuccessfulCreate

Created pod: ovn-operator-controller-manager-75684d597f-nkfd6

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-75684d597f to 1

openstack-operators

cert-manager-certificates-key-manager

octavia-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-9b2zt"

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-5fdb694969 to 1

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-7c6767dc9c

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

openstack-operators

default-scheduler

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk to master-0

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-7c6767dc9c to 1

openstack-operators

replicaset-controller

cinder-operator-controller-manager-55d77d7b5c

SuccessfulCreate

Created pod: cinder-operator-controller-manager-55d77d7b5c-rrvfw

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-55d77d7b5c to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-5fdb694969

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-5fdb694969-sqmsb

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-85db8c7646 to 1

openstack-operators

default-scheduler

telemetry-operator-controller-manager-5fdb694969-sqmsb

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-5fdb694969-sqmsb to master-0

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

default-scheduler

placement-operator-controller-manager-648564c9fc-qlhb7

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-648564c9fc-qlhb7 to master-0

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-9b9ff9f4d to 1

openstack-operators

replicaset-controller

swift-operator-controller-manager-9b9ff9f4d

SuccessfulCreate

Created pod: swift-operator-controller-manager-9b9ff9f4d-n6mwk

openstack-operators

default-scheduler

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-9b9ff9f4d-n6mwk to master-0

openstack-operators

replicaset-controller

placement-operator-controller-manager-648564c9fc

SuccessfulCreate

Created pod: placement-operator-controller-manager-648564c9fc-qlhb7

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-648564c9fc to 1

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-55d77d7b5c to 1

openstack-operators

default-scheduler

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-9b9ff9f4d-n6mwk to master-0

openstack-operators

replicaset-controller

swift-operator-controller-manager-9b9ff9f4d

SuccessfulCreate

Created pod: swift-operator-controller-manager-9b9ff9f4d-n6mwk

openstack-operators

replicaset-controller

designate-operator-controller-manager-5d87c9d997

SuccessfulCreate

Created pod: designate-operator-controller-manager-5d87c9d997-5hmg6

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-9b9ff9f4d to 1

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

default-scheduler

telemetry-operator-controller-manager-5fdb694969-sqmsb

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-5fdb694969-sqmsb to master-0

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-5d87c9d997 to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-5fdb694969

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-5fdb694969-sqmsb

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-5fdb694969 to 1

openstack-operators

cert-manager-certificates-trigger

telemetry-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

default-scheduler

nova-operator-controller-manager-74b6b5dc96-vbkgr

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-74b6b5dc96-vbkgr to master-0

openstack-operators

default-scheduler

test-operator-controller-manager-55b5ff4dbb-56p6p

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-55b5ff4dbb-56p6p to master-0

openstack-operators

replicaset-controller

test-operator-controller-manager-55b5ff4dbb

SuccessfulCreate

Created pod: test-operator-controller-manager-55b5ff4dbb-56p6p

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-55b5ff4dbb to 1

openstack-operators

cert-manager-certificates-trigger

test-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

default-scheduler

watcher-operator-controller-manager-bccc79885-zvtv4

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-zvtv4 to master-0

openstack-operators

replicaset-controller

nova-operator-controller-manager-74b6b5dc96

SuccessfulCreate

Created pod: nova-operator-controller-manager-74b6b5dc96-vbkgr

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-74b6b5dc96 to 1

openstack-operators

replicaset-controller

watcher-operator-controller-manager-bccc79885

SuccessfulCreate

Created pod: watcher-operator-controller-manager-bccc79885-zvtv4

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-bccc79885 to 1

openstack-operators

default-scheduler

glance-operator-controller-manager-64db6967f8-9tzwz

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-64db6967f8-9tzwz to master-0

openstack-operators

default-scheduler

octavia-operator-controller-manager-5d86c7ddb7-xgs7b

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-xgs7b to master-0

openstack-operators

replicaset-controller

octavia-operator-controller-manager-5d86c7ddb7

SuccessfulCreate

Created pod: octavia-operator-controller-manager-5d86c7ddb7-xgs7b

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-5d86c7ddb7 to 1

openstack-operators

replicaset-controller

glance-operator-controller-manager-64db6967f8

SuccessfulCreate

Created pod: glance-operator-controller-manager-64db6967f8-9tzwz

openstack-operators

cert-manager-certificates-key-manager

octavia-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-9b2zt"

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-7c6767dc9c

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

openstack-operators

default-scheduler

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk to master-0

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-7c6767dc9c to 1

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-64db6967f8 to 1

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-85db8c7646 to 1

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-47nn6"

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

watcher-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-668c99d594

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-668c99d594-6fnf7

openstack-operators

multus

barbican-operator-controller-manager-6db6876945-j9sh4

AddedInterface

Add eth0 [10.128.0.145/23] from ovn-kubernetes

openstack-operators

kubelet

barbican-operator-controller-manager-6db6876945-j9sh4

Pulling

Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120"

openstack-operators

default-scheduler

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6fnf7 to master-0

openstack-operators

multus

glance-operator-controller-manager-64db6967f8-9tzwz

AddedInterface

Add eth0 [10.128.0.148/23] from ovn-kubernetes

openstack-operators

replicaset-controller

openstack-operator-controller-manager-85db8c7646

SuccessfulCreate

Created pod: openstack-operator-controller-manager-85db8c7646-pflgk

openstack-operators

multus

cinder-operator-controller-manager-55d77d7b5c-rrvfw

AddedInterface

Add eth0 [10.128.0.146/23] from ovn-kubernetes

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-rrvfw

Pulling

Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3"

openstack-operators

cert-manager-certificates-trigger

infra-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

multus

designate-operator-controller-manager-5d87c9d997-5hmg6

AddedInterface

Add eth0 [10.128.0.147/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-jx5f9"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

default-scheduler

openstack-operator-controller-manager-85db8c7646-pflgk

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-85db8c7646-pflgk to master-0

openstack-operators

multus

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

AddedInterface

Add eth0 [10.128.0.150/23] from ovn-kubernetes

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-8wjbv

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053"

openstack-operators

cert-manager-certificaterequests-issuer-acme

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

heat-operator-controller-manager-cf99c678f-8wjbv

AddedInterface

Add eth0 [10.128.0.149/23] from ovn-kubernetes

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-9tzwz

Pulling

Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051"

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Pulling

Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

designate-operator-controller-manager-5d87c9d997-5hmg6

Pulling

Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214"

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

octavia-operator-metrics-certs

Requested

Created new CertificateRequest resource "octavia-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

openstack-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

ovn-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-4hfcl"

openstack-operators

multus

keystone-operator-controller-manager-7c789f89c6-jrqk2

AddedInterface

Add eth0 [10.128.0.153/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

nova-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "nova-operator-metrics-certs-kg8gs"

openstack-operators

multus

ironic-operator-controller-manager-686765764-jhdvn

AddedInterface

Add eth0 [10.128.0.152/23] from ovn-kubernetes

openstack-operators

kubelet

ironic-operator-controller-manager-686765764-jhdvn

Pulling

Pulling image "38.102.83.110:5001/openstack-k8s-operators/ironic-operator:3ac15a30b8dffc621c89f29bf4cf0301e4492c4e"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

nova-operator-controller-manager-74b6b5dc96-vbkgr

AddedInterface

Add eth0 [10.128.0.157/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-approver

octavia-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-8qlkx"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

multus

test-operator-controller-manager-55b5ff4dbb-56p6p

AddedInterface

Add eth0 [10.128.0.164/23] from ovn-kubernetes

openstack-operators

multus

placement-operator-controller-manager-648564c9fc-qlhb7

AddedInterface

Add eth0 [10.128.0.161/23] from ovn-kubernetes

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-b6ldp

Pulling

Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4"

openstack-operators

multus

neutron-operator-controller-manager-54688575f-b6ldp

AddedInterface

Add eth0 [10.128.0.156/23] from ovn-kubernetes

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-l4w8t

Pulling

Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505"

openstack-operators

multus

manila-operator-controller-manager-67d996989d-xwlf2

AddedInterface

Add eth0 [10.128.0.154/23] from ovn-kubernetes

openstack-operators

multus

mariadb-operator-controller-manager-7b6bfb6475-l4w8t

AddedInterface

Add eth0 [10.128.0.155/23] from ovn-kubernetes

openstack-operators

kubelet

keystone-operator-controller-manager-7c789f89c6-jrqk2

Pulling

Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c"

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-xwlf2

Pulling

Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26"

openstack-operators

multus

swift-operator-controller-manager-9b9ff9f4d-n6mwk

AddedInterface

Add eth0 [10.128.0.162/23] from ovn-kubernetes

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-qlhb7

Pulling

Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e"

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7"

openstack-operators

multus

octavia-operator-controller-manager-5d86c7ddb7-xgs7b

AddedInterface

Add eth0 [10.128.0.158/23] from ovn-kubernetes

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-xgs7b

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd"

openstack-operators

multus

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

AddedInterface

Add eth0 [10.128.0.167/23] from ovn-kubernetes

openstack-operators

multus

telemetry-operator-controller-manager-5fdb694969-sqmsb

AddedInterface

Add eth0 [10.128.0.163/23] from ovn-kubernetes

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-sqmsb

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6"

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

Pulling

Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-nkfd6

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c"

openstack-operators

kubelet

test-operator-controller-manager-55b5ff4dbb-56p6p

Pulling

Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968"

openstack-operators

multus

ovn-operator-controller-manager-75684d597f-nkfd6

AddedInterface

Add eth0 [10.128.0.160/23] from ovn-kubernetes

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-vbkgr

Pulling

Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84"

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-zvtv4

Pulling

Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97"

openstack-operators

multus

watcher-operator-controller-manager-bccc79885-zvtv4

AddedInterface

Add eth0 [10.128.0.165/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-venafi

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

neutron-operator-metrics-certs

Requested

Created new CertificateRequest resource "neutron-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-ca

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-request-manager

nova-operator-metrics-certs

Requested

Created new CertificateRequest resource "nova-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

swift-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "swift-operator-metrics-certs-bcssx"

openstack-operators

cert-manager-certificates-key-manager

telemetry-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-mg8cj"

openstack-operators

cert-manager-certificates-key-manager

placement-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "placement-operator-metrics-certs-zg9ld"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-mm7gp"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

nova-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-h77rr"

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-6hkpz"

openstack-operators

cert-manager-certificaterequests-issuer-ca

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

ovn-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-request-manager

placement-operator-metrics-certs

Requested

Created new CertificateRequest resource "placement-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

ironic-operator-metrics-certs

Requested

Created new CertificateRequest resource "ironic-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

infra-operator-metrics-certs

Requested

Created new CertificateRequest resource "infra-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully (x5)

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

placement-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-fdz97"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

ironic-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-approver

placement-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-fdz97"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

ironic-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
(x5)

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-c4fj7"

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-venafi

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-c4fj7"

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-nqsml"

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-nqsml"

openstack-operators

cert-manager-certificaterequests-approver

telemetry-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-venafi

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1"
openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"
openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-85db8c7646-pflgk | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-85db8c7646-pflgk | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-85db8c7646-pflgk | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-85db8c7646-pflgk | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found (x6)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | placement-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | placement-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | barbican-operator-controller-manager-6db6876945-j9sh4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120" in 28.133s (28.133s including waiting). Image size: 191115738 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-6db6876945-j9sh4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120" in 28.133s (28.133s including waiting). Image size: 191115738 bytes.
openstack-operators | kubelet | test-operator-controller-manager-55b5ff4dbb-56p6p | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" in 26.042s (26.042s including waiting). Image size: 188905402 bytes.
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-rrvfw | Started | Started container manager
openstack-operators | kubelet | placement-operator-controller-manager-648564c9fc-qlhb7 | Created | Created container: manager
openstack-operators | kubelet | placement-operator-controller-manager-648564c9fc-qlhb7 | Started | Started container manager
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-rrvfw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" in 28.376s (28.376s including waiting). Image size: 191425982 bytes.
openstack-operators | kubelet | keystone-operator-controller-manager-7c789f89c6-jrqk2 | Started | Started container manager
openstack-operators | kubelet | keystone-operator-controller-manager-7c789f89c6-jrqk2 | Created | Created container: manager
openstack-operators | kubelet | keystone-operator-controller-manager-7c789f89c6-jrqk2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c" in 26.901s (26.901s including waiting). Image size: 193036438 bytes.
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-rrvfw | Created | Created container: manager
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-6fnf7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 26.124s (26.124s including waiting). Image size: 176351298 bytes.
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-rrvfw | Started | Started container manager
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-rrvfw | Created | Created container: manager
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-zvtv4 | Created | Created container: manager
openstack-operators | kubelet | designate-operator-controller-manager-5d87c9d997-5hmg6 | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-xwlf2 | Started | Started container manager
openstack-operators | glance-operator-controller-manager-64db6967f8-9tzwz_44e56404-fe40-44fe-baec-8eaacb4adaa8 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-64db6967f8-9tzwz_44e56404-fe40-44fe-baec-8eaacb4adaa8 became leader
openstack-operators | watcher-operator-controller-manager-bccc79885-zvtv4_c9735028-66cc-479d-829d-2eff208f9006 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-bccc79885-zvtv4_c9735028-66cc-479d-829d-2eff208f9006 became leader
openstack-operators | kubelet | telemetry-operator-controller-manager-5fdb694969-sqmsb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6" in 26.094s (26.094s including waiting). Image size: 196200931 bytes.
openstack-operators | kubelet | designate-operator-controller-manager-5d87c9d997-5hmg6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" in 27.99s (27.99s including waiting). Image size: 195967461 bytes.
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-xwlf2 | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-xwlf2 | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-55b5ff4dbb-56p6p | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" in 26.042s (26.042s including waiting). Image size: 188905402 bytes.
openstack-operators | kubelet | test-operator-controller-manager-55b5ff4dbb-56p6p | Created | Created container: manager
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-rrvfw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" in 28.376s (28.376s including waiting). Image size: 191425982 bytes.
openstack-operators | glance-operator-controller-manager-64db6967f8-9tzwz_44e56404-fe40-44fe-baec-8eaacb4adaa8 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-64db6967f8-9tzwz_44e56404-fe40-44fe-baec-8eaacb4adaa8 became leader
openstack-operators | kubelet | swift-operator-controller-manager-9b9ff9f4d-n6mwk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7" in 26.086s (26.086s including waiting). Image size: 192121261 bytes.
openstack-operators | kubelet | designate-operator-controller-manager-5d87c9d997-5hmg6 | Started | Started container manager
openstack-operators | kubelet | ironic-operator-controller-manager-686765764-jhdvn | Pulled | Successfully pulled image "38.102.83.110:5001/openstack-k8s-operators/ironic-operator:3ac15a30b8dffc621c89f29bf4cf0301e4492c4e" in 26.998s (26.998s including waiting). Image size: 191661876 bytes.
openstack-operators | kubelet | test-operator-controller-manager-55b5ff4dbb-56p6p | Started | Started container manager
openstack-operators | kubelet | nova-operator-controller-manager-74b6b5dc96-vbkgr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" in 26.094s (26.094s including waiting). Image size: 193630055 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-686765764-jhdvn | Pulled | Successfully pulled image "38.102.83.110:5001/openstack-k8s-operators/ironic-operator:3ac15a30b8dffc621c89f29bf4cf0301e4492c4e" in 26.998s (26.998s including waiting). Image size: 191661876 bytes.
openstack-operators | designate-operator-controller-manager-5d87c9d997-5hmg6_90bfb10f-ebac-448b-a8ec-481633c509c3 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-5d87c9d997-5hmg6_90bfb10f-ebac-448b-a8ec-481633c509c3 became leader
openstack-operators | watcher-operator-controller-manager-bccc79885-zvtv4_c9735028-66cc-479d-829d-2eff208f9006 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-bccc79885-zvtv4_c9735028-66cc-479d-829d-2eff208f9006 became leader
openstack-operators | kubelet | test-operator-controller-manager-55b5ff4dbb-56p6p | Started | Started container manager
openstack-operators | kubelet | glance-operator-controller-manager-64db6967f8-9tzwz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051" in 27.958s (27.958s including waiting). Image size: 192004030 bytes.
openstack-operators | test-operator-controller-manager-55b5ff4dbb-56p6p_c532618b-080b-4d09-bdb1-36df2ebe9ce6 | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-55b5ff4dbb-56p6p_c532618b-080b-4d09-bdb1-36df2ebe9ce6 became leader
openstack-operators | kubelet | keystone-operator-controller-manager-7c789f89c6-jrqk2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c" in 26.901s (26.901s including waiting). Image size: 193036438 bytes.
openstack-operators | placement-operator-controller-manager-648564c9fc-qlhb7_3003eb40-68ca-4a5a-8287-a32f05b34ed8 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-648564c9fc-qlhb7_3003eb40-68ca-4a5a-8287-a32f05b34ed8 became leader
openstack-operators | kubelet | glance-operator-controller-manager-64db6967f8-9tzwz | Created | Created container: manager
openstack-operators | kubelet | glance-operator-controller-manager-64db6967f8-9tzwz | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-55b5ff4dbb-56p6p | Created | Created container: manager
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-zvtv4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" in 25.91s (25.91s including waiting). Image size: 190936524 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-zvtv4 | Created | Created container: manager
openstack-operators | kubelet | neutron-operator-controller-manager-54688575f-b6ldp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" in 26.959s (26.959s including waiting). Image size: 191026634 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-zvtv4 | Started | Started container manager
openstack-operators | kubelet | keystone-operator-controller-manager-7c789f89c6-jrqk2 | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-xwlf2 | Created | Created container: manager
openstack-operators | kubelet | mariadb-operator-controller-manager-7b6bfb6475-l4w8t | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505" in 26.95s (26.95s including waiting). Image size: 189416143 bytes.
openstack-operators | kubelet | mariadb-operator-controller-manager-7b6bfb6475-l4w8t | Created | Created container: manager
openstack-operators | kubelet | mariadb-operator-controller-manager-7b6bfb6475-l4w8t | Started | Started container manager
openstack-operators | kubelet | octavia-operator-controller-manager-5d86c7ddb7-xgs7b | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd" in 25.945s (25.945s including waiting). Image size: 193556939 bytes.
openstack-operators | kubelet | octavia-operator-controller-manager-5d86c7ddb7-xgs7b | Created | Created container: manager
openstack-operators | kubelet | octavia-operator-controller-manager-5d86c7ddb7-xgs7b | Started | Started container manager
openstack-operators | kubelet | keystone-operator-controller-manager-7c789f89c6-jrqk2 | Started | Started container manager
openstack-operators | kubelet | heat-operator-controller-manager-cf99c678f-8wjbv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053" in 27.298s (27.298s including waiting). Image size: 191606181 bytes.
openstack-operators | kubelet | ovn-operator-controller-manager-75684d597f-nkfd6 | Started | Started container manager
openstack-operators | kubelet | barbican-operator-controller-manager-6db6876945-j9sh4 | Started | Started container manager
openstack-operators | kubelet | heat-operator-controller-manager-cf99c678f-8wjbv | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-xwlf2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" in 26.99s (26.99s including waiting). Image size: 191246784 bytes.
openstack-operators | kubelet | designate-operator-controller-manager-5d87c9d997-5hmg6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" in 27.99s (27.99s including waiting). Image size: 195967461 bytes.
openstack-operators | kubelet | infra-operator-controller-manager-b8c8d7cc8-bhcdj | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:417a4ede6dce5d088ce7dc1ac6e9dab30f3b532bd5b506e2df65d6eaecbc7cb9"
openstack-operators | multus | infra-operator-controller-manager-b8c8d7cc8-bhcdj | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes
openstack-operators | kubelet | designate-operator-controller-manager-5d87c9d997-5hmg6 | Created | Created container: manager
openstack-operators | kubelet | designate-operator-controller-manager-5d87c9d997-5hmg6 | Started | Started container manager
openstack-operators | ovn-operator-controller-manager-75684d597f-nkfd6_103d5a91-a8e5-4bce-865a-0da584f26476 | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-75684d597f-nkfd6_103d5a91-a8e5-4bce-865a-0da584f26476 became leader
openstack-operators | kubelet | heat-operator-controller-manager-cf99c678f-8wjbv | Started | Started container manager
openstack-operators | kubelet | telemetry-operator-controller-manager-5fdb694969-sqmsb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6" in 26.094s (26.094s including waiting). Image size: 196200931 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-6db6876945-j9sh4 | Created | Created container: manager
openstack-operators

test-operator-controller-manager-55b5ff4dbb-56p6p_c532618b-080b-4d09-bdb1-36df2ebe9ce6

6cce095b.openstack.org

LeaderElection

test-operator-controller-manager-55b5ff4dbb-56p6p_c532618b-080b-4d09-bdb1-36df2ebe9ce6 became leader

openstack-operators

placement-operator-controller-manager-648564c9fc-qlhb7_3003eb40-68ca-4a5a-8287-a32f05b34ed8

73d6b7ce.openstack.org

LeaderElection

placement-operator-controller-manager-648564c9fc-qlhb7_3003eb40-68ca-4a5a-8287-a32f05b34ed8 became leader

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7" in 26.086s (26.086s including waiting). Image size: 192121261 bytes.

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3" in 27.283s (27.283s including waiting). Image size: 190376908 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-b6ldp

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" in 26.959s (26.959s including waiting). Image size: 191026634 bytes.

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Created

Created container: manager

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Started

Started container manager

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 26.124s (26.124s including waiting). Image size: 176351298 bytes.

openstack-operators

designate-operator-controller-manager-5d87c9d997-5hmg6_90bfb10f-ebac-448b-a8ec-481633c509c3

f9497e05.openstack.org

LeaderElection

designate-operator-controller-manager-5d87c9d997-5hmg6_90bfb10f-ebac-448b-a8ec-481633c509c3 became leader

openstack-operators

kubelet

barbican-operator-controller-manager-6db6876945-j9sh4

Started

Started container manager

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Started

Started container manager

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Created

Created container: manager

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-wq5pz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3" in 27.283s (27.283s including waiting). Image size: 190376908 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-qlhb7

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e" in 26.06s (26.06s including waiting). Image size: 190626280 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-qlhb7

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-nkfd6

Created

Created container: manager

openstack-operators

ovn-operator-controller-manager-75684d597f-nkfd6_103d5a91-a8e5-4bce-865a-0da584f26476

90840a60.openstack.org

LeaderElection

ovn-operator-controller-manager-75684d597f-nkfd6_103d5a91-a8e5-4bce-865a-0da584f26476 became leader

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-vbkgr

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" in 26.094s (26.094s including waiting). Image size: 193630055 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-qlhb7

Created

Created container: manager

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-9tzwz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051" in 27.958s (27.958s including waiting). Image size: 192004030 bytes.

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-9tzwz

Created

Created container: manager

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-l4w8t

Started

Started container manager

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-zvtv4

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" in 25.91s (25.91s including waiting). Image size: 190936524 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-l4w8t

Created

Created container: manager

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-9tzwz

Started

Started container manager

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-qlhb7

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e" in 26.06s (26.06s including waiting). Image size: 190626280 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-l4w8t

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505" in 26.95s (26.95s including waiting). Image size: 189416143 bytes.

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-nkfd6

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c" in 25.96s (25.96s including waiting). Image size: 190114712 bytes.

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-nkfd6

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-nkfd6

Created

Created container: manager

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-8wjbv

Started

Started container manager

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-8wjbv

Created

Created container: manager

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-8wjbv

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053" in 27.298s (27.298s including waiting). Image size: 191606181 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-xgs7b

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd" in 25.945s (25.945s including waiting). Image size: 193556939 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-xgs7b

Created

Created container: manager

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-xgs7b

Started

Started container manager

openstack-operators

kubelet

barbican-operator-controller-manager-6db6876945-j9sh4

Created

Created container: manager

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-xwlf2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" in 26.99s (26.99s including waiting). Image size: 191246784 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-zvtv4

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-nkfd6

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c" in 25.96s (25.96s including waiting). Image size: 190114712 bytes.

openstack-operators

multus

infra-operator-controller-manager-b8c8d7cc8-bhcdj

AddedInterface

Add eth0 [10.128.0.151/23] from ovn-kubernetes

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Pulling

Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:417a4ede6dce5d088ce7dc1ac6e9dab30f3b532bd5b506e2df65d6eaecbc7cb9"

openstack-operators

kubelet

ironic-operator-controller-manager-686765764-jhdvn

Created

Created container: manager

openstack-operators

manila-operator-controller-manager-67d996989d-xwlf2_08de1fbb-8bdb-434c-a171-7d1fcffe81d1

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-67d996989d-xwlf2_08de1fbb-8bdb-434c-a171-7d1fcffe81d1 became leader

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-b6ldp

Started

Started container manager

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-6fnf7_d4d41c47-df79-4707-bf3c-1d044c9ddd2b

rabbitmq-cluster-operator-leader-election

LeaderElection

rabbitmq-cluster-operator-manager-668c99d594-6fnf7_d4d41c47-df79-4707-bf3c-1d044c9ddd2b became leader

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

Created

Created container: operator

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

Started

Started container operator

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Created

Created container: manager

openstack-operators

cinder-operator-controller-manager-55d77d7b5c-rrvfw_3bac8b1e-aff1-4cff-924f-ab473cce4b8b

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-55d77d7b5c-rrvfw_3bac8b1e-aff1-4cff-924f-ab473cce4b8b became leader

openstack-operators

octavia-operator-controller-manager-5d86c7ddb7-xgs7b_402c8c68-0028-43a5-b36d-289bbb1f8274

98809e87.openstack.org

LeaderElection

octavia-operator-controller-manager-5d86c7ddb7-xgs7b_402c8c68-0028-43a5-b36d-289bbb1f8274 became leader

openstack-operators

neutron-operator-controller-manager-54688575f-b6ldp_cec56b32-f1fd-4e14-a35e-4685fd0aca75

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-54688575f-b6ldp_cec56b32-f1fd-4e14-a35e-4685fd0aca75 became leader

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-vbkgr

Started

Started container manager

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-vbkgr

Created

Created container: manager

openstack-operators

barbican-operator-controller-manager-6db6876945-j9sh4_c0c7e220-c2f8-466a-b36e-bff870ca46b1

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-6db6876945-j9sh4_c0c7e220-c2f8-466a-b36e-bff870ca46b1 became leader

openstack-operators

heat-operator-controller-manager-cf99c678f-8wjbv_aeedd659-649e-4cf7-8e31-262e66c7b8a6

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-cf99c678f-8wjbv_aeedd659-649e-4cf7-8e31-262e66c7b8a6 became leader

openstack-operators

telemetry-operator-controller-manager-5fdb694969-sqmsb_e98011ed-78c1-45ec-a40c-8556428f0c0c

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-5fdb694969-sqmsb_e98011ed-78c1-45ec-a40c-8556428f0c0c became leader

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-b6ldp

Started

Started container manager

openstack-operators

ironic-operator-controller-manager-686765764-jhdvn_eb56c8d3-39bf-4d29-96ca-ecd3bd0fefc6

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-686765764-jhdvn_eb56c8d3-39bf-4d29-96ca-ecd3bd0fefc6 became leader

openstack-operators

nova-operator-controller-manager-74b6b5dc96-vbkgr_9244f50b-2672-4961-b34d-ae1fdd090601

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-74b6b5dc96-vbkgr_9244f50b-2672-4961-b34d-ae1fdd090601 became leader

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-6fnf7_d4d41c47-df79-4707-bf3c-1d044c9ddd2b

rabbitmq-cluster-operator-leader-election

LeaderElection

rabbitmq-cluster-operator-manager-668c99d594-6fnf7_d4d41c47-df79-4707-bf3c-1d044c9ddd2b became leader

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Started

Started container manager

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

Created

Created container: operator

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-6fnf7

Started

Started container operator

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-sqmsb

Created

Created container: manager

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-b6ldp

Created

Created container: manager

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-sqmsb

Started

Started container manager

openstack-operators

mariadb-operator-controller-manager-7b6bfb6475-l4w8t_f4c752f6-f8ce-4b14-b3f2-c303b557d437

7c2a6c6b.openstack.org

LeaderElection

mariadb-operator-controller-manager-7b6bfb6475-l4w8t_f4c752f6-f8ce-4b14-b3f2-c303b557d437 became leader

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Created

Created container: manager

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-n6mwk

Started

Started container manager

openstack-operators

cinder-operator-controller-manager-55d77d7b5c-rrvfw_3bac8b1e-aff1-4cff-924f-ab473cce4b8b

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-55d77d7b5c-rrvfw_3bac8b1e-aff1-4cff-924f-ab473cce4b8b became leader

openstack-operators

octavia-operator-controller-manager-5d86c7ddb7-xgs7b_402c8c68-0028-43a5-b36d-289bbb1f8274

98809e87.openstack.org

LeaderElection

octavia-operator-controller-manager-5d86c7ddb7-xgs7b_402c8c68-0028-43a5-b36d-289bbb1f8274 became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-sqmsb

Created

Created container: manager

openstack-operators

neutron-operator-controller-manager-54688575f-b6ldp_cec56b32-f1fd-4e14-a35e-4685fd0aca75

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-54688575f-b6ldp_cec56b32-f1fd-4e14-a35e-4685fd0aca75 became leader

openstack-operators

swift-operator-controller-manager-9b9ff9f4d-n6mwk_78796502-1bb6-4762-8d89-5b1b90cb0f5b

83821f12.openstack.org

LeaderElection

swift-operator-controller-manager-9b9ff9f4d-n6mwk_78796502-1bb6-4762-8d89-5b1b90cb0f5b became leader

openstack-operators

horizon-operator-controller-manager-78bc7f9bd9-wq5pz_e135aef2-7220-431e-90dc-cfc815650ec7

5ad2eba0.openstack.org

LeaderElection

horizon-operator-controller-manager-78bc7f9bd9-wq5pz_e135aef2-7220-431e-90dc-cfc815650ec7 became leader

openstack-operators

swift-operator-controller-manager-9b9ff9f4d-n6mwk_78796502-1bb6-4762-8d89-5b1b90cb0f5b

83821f12.openstack.org

LeaderElection

swift-operator-controller-manager-9b9ff9f4d-n6mwk_78796502-1bb6-4762-8d89-5b1b90cb0f5b became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-sqmsb

Started

Started container manager

openstack-operators

kubelet

ironic-operator-controller-manager-686765764-jhdvn

Started

Started container manager

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-b6ldp

Created

Created container: manager

openstack-operators

kubelet

ironic-operator-controller-manager-686765764-jhdvn

Created

Created container: manager

openstack-operators

manila-operator-controller-manager-67d996989d-xwlf2_08de1fbb-8bdb-434c-a171-7d1fcffe81d1

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-67d996989d-xwlf2_08de1fbb-8bdb-434c-a171-7d1fcffe81d1 became leader

openstack-operators

barbican-operator-controller-manager-6db6876945-j9sh4_c0c7e220-c2f8-466a-b36e-bff870ca46b1

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-6db6876945-j9sh4_c0c7e220-c2f8-466a-b36e-bff870ca46b1 became leader

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-vbkgr

Started

Started container manager

openstack-operators

mariadb-operator-controller-manager-7b6bfb6475-l4w8t_f4c752f6-f8ce-4b14-b3f2-c303b557d437

7c2a6c6b.openstack.org

LeaderElection

mariadb-operator-controller-manager-7b6bfb6475-l4w8t_f4c752f6-f8ce-4b14-b3f2-c303b557d437 became leader

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-vbkgr

Created

Created container: manager

openstack-operators

keystone-operator-controller-manager-7c789f89c6-jrqk2_6410f769-a34c-4890-8a20-9b4a6e53ad62

6012128b.openstack.org

LeaderElection

keystone-operator-controller-manager-7c789f89c6-jrqk2_6410f769-a34c-4890-8a20-9b4a6e53ad62 became leader

openstack-operators

heat-operator-controller-manager-cf99c678f-8wjbv_aeedd659-649e-4cf7-8e31-262e66c7b8a6

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-cf99c678f-8wjbv_aeedd659-649e-4cf7-8e31-262e66c7b8a6 became leader

openstack-operators

horizon-operator-controller-manager-78bc7f9bd9-wq5pz_e135aef2-7220-431e-90dc-cfc815650ec7

5ad2eba0.openstack.org

LeaderElection

horizon-operator-controller-manager-78bc7f9bd9-wq5pz_e135aef2-7220-431e-90dc-cfc815650ec7 became leader

openstack-operators

telemetry-operator-controller-manager-5fdb694969-sqmsb_e98011ed-78c1-45ec-a40c-8556428f0c0c

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-5fdb694969-sqmsb_e98011ed-78c1-45ec-a40c-8556428f0c0c became leader

openstack-operators

keystone-operator-controller-manager-7c789f89c6-jrqk2_6410f769-a34c-4890-8a20-9b4a6e53ad62

6012128b.openstack.org

LeaderElection

keystone-operator-controller-manager-7c789f89c6-jrqk2_6410f769-a34c-4890-8a20-9b4a6e53ad62 became leader

openstack-operators

ironic-operator-controller-manager-686765764-jhdvn_eb56c8d3-39bf-4d29-96ca-ecd3bd0fefc6

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-686765764-jhdvn_eb56c8d3-39bf-4d29-96ca-ecd3bd0fefc6 became leader

openstack-operators

nova-operator-controller-manager-74b6b5dc96-vbkgr_9244f50b-2672-4961-b34d-ae1fdd090601

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-74b6b5dc96-vbkgr_9244f50b-2672-4961-b34d-ae1fdd090601 became leader

openstack-operators

kubelet

ironic-operator-controller-manager-686765764-jhdvn

Started

Started container manager

openstack-operators

multus

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

AddedInterface

Add eth0 [10.128.0.159/23] from ovn-kubernetes

openstack-operators

multus

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

AddedInterface

Add eth0 [10.128.0.159/23] from ovn-kubernetes

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Created

Created container: manager

openstack-operators

multus

openstack-operator-controller-manager-85db8c7646-pflgk

AddedInterface

Add eth0 [10.128.0.166/23] from ovn-kubernetes

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:417a4ede6dce5d088ce7dc1ac6e9dab30f3b532bd5b506e2df65d6eaecbc7cb9" in 3.829s (3.829s including waiting). Image size: 192852404 bytes.

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:417a4ede6dce5d088ce7dc1ac6e9dab30f3b532bd5b506e2df65d6eaecbc7cb9" in 3.829s (3.829s including waiting). Image size: 192852404 bytes.

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Started

Started container manager

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Created

Created container: manager

openstack-operators

kubelet

openstack-operator-controller-manager-85db8c7646-pflgk

Pulled

Container image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator:da1efd1b58ce237ec2ea1856e07a2da750caf6eb" already present on machine

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"

openstack-operators

infra-operator-controller-manager-b8c8d7cc8-bhcdj_41a1f85a-9579-43e9-912b-4c0bf5d03e4f

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-b8c8d7cc8-bhcdj_41a1f85a-9579-43e9-912b-4c0bf5d03e4f became leader

openstack-operators

infra-operator-controller-manager-b8c8d7cc8-bhcdj_41a1f85a-9579-43e9-912b-4c0bf5d03e4f

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-b8c8d7cc8-bhcdj_41a1f85a-9579-43e9-912b-4c0bf5d03e4f became leader

openstack-operators

kubelet

openstack-operator-controller-manager-85db8c7646-pflgk

Pulled

Container image "38.102.83.110:5001/openstack-k8s-operators/openstack-operator:da1efd1b58ce237ec2ea1856e07a2da750caf6eb" already present on machine

openstack-operators

multus

openstack-operator-controller-manager-85db8c7646-pflgk

AddedInterface

Add eth0 [10.128.0.166/23] from ovn-kubernetes

openstack-operators

kubelet

infra-operator-controller-manager-b8c8d7cc8-bhcdj

Started

Started container manager

openstack-operators

kubelet

openstack-operator-controller-manager-85db8c7646-pflgk

Created

Created container: manager

openstack-operators

kubelet

openstack-operator-controller-manager-85db8c7646-pflgk

Started

Started container manager

openstack-operators

kubelet

openstack-operator-controller-manager-85db8c7646-pflgk

Started

Started container manager

openstack-operators

kubelet

openstack-operator-controller-manager-85db8c7646-pflgk

Created

Created container: manager

openstack-operators

openstack-operator-controller-manager-85db8c7646-pflgk_74988b8f-8793-4eb3-9495-064df400ed15

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-85db8c7646-pflgk_74988b8f-8793-4eb3-9495-064df400ed15 became leader

openstack-operators

openstack-operator-controller-manager-85db8c7646-pflgk_74988b8f-8793-4eb3-9495-064df400ed15

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-85db8c7646-pflgk_74988b8f-8793-4eb3-9495-064df400ed15 became leader

openstack-operators

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk_d7fef430-c117-408d-99b0-e2fb237349fb

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk_d7fef430-c117-408d-99b0-e2fb237349fb became leader

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Created

Created container: manager

openstack-operators

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk_d7fef430-c117-408d-99b0-e2fb237349fb

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk_d7fef430-c117-408d-99b0-e2fb237349fb became leader

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 1.999s (1.999s including waiting). Image size: 190527593 bytes.

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Created

Created container: manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-7c6767dc9cr22lk

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 1.999s (1.999s including waiting). Image size: 190527593 bytes.

openstack | cert-manager-certificates-trigger | rootca-public | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | cert-manager-issuers | rootca-public | ErrInitIssuer | Error initializing issuer: secrets "rootca-public" not found (x2)
openstack | cert-manager-issuers | rootca-public | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-public" not found
openstack | cert-manager-certificates-trigger | rootca-internal | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-issuing | rootca-public | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | rootca-public-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | CertificateIssued | Certificate fetched from issuer successfully (x2)
openstack | cert-manager-issuers | rootca-internal | ErrInitIssuer | Error initializing issuer: secrets "rootca-internal" not found (x2)
openstack | cert-manager-issuers | rootca-internal | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-internal" not found
openstack | cert-manager-certificates-key-manager | rootca-public | Generated | Stored new private key in temporary Secret resource "rootca-public-4bdjx"
openstack | cert-manager-certificates-request-manager | rootca-public | Requested | Created new CertificateRequest resource "rootca-public-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | rootca-internal | Requested | Created new CertificateRequest resource "rootca-internal-1"
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | rootca-internal | Generated | Stored new private key in temporary Secret resource "rootca-internal-g28x4"
openstack | cert-manager-certificaterequests-issuer-acme | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | rootca-internal-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | cert-manager-issuers | rootca-ovn | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-ovn" not found
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-request-manager | rootca-libvirt | Requested | Created new CertificateRequest resource "rootca-libvirt-1"
openstack | cert-manager-certificates-key-manager | rootca-libvirt | Generated | Stored new private key in temporary Secret resource "rootca-libvirt-6lzh6"
openstack | cert-manager-certificaterequests-approver | rootca-libvirt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io (x2)
openstack | cert-manager-issuers | rootca-ovn | ErrInitIssuer | Error initializing issuer: secrets "rootca-ovn" not found
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-issuing | rootca-libvirt | Issuing | The certificate has been successfully issued (x2)
openstack | cert-manager-issuers | rootca-libvirt | ErrInitIssuer | Error initializing issuer: secrets "rootca-libvirt" not found (x2)
openstack | cert-manager-issuers | rootca-libvirt | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-libvirt" not found
openstack | cert-manager-certificates-trigger | rootca-libvirt | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | rootca-ovn | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-vault | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | rootca-internal | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | rootca-ovn | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-vault | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | rootca-ovn | Requested | Created new CertificateRequest resource "rootca-ovn-1"
openstack | cert-manager-certificates-key-manager | rootca-ovn | Generated | Stored new private key in temporary Secret resource "rootca-ovn-jft4w"
openstack | cert-manager-certificaterequests-issuer-acme | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x3)
openstack | cert-manager-issuers | rootca-public | KeyPairVerified | Signing CA verified
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rootca-ovn-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x3)
openstack | cert-manager-issuers | rootca-internal | KeyPairVerified | Signing CA verified
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-5d859fb5df to 1
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-55994974c5 to 1
openstack | cert-manager-certificates-key-manager | rabbitmq-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-svc-q6hk4"
openstack | cert-manager-certificates-trigger | rabbitmq-svc | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | dnsmasq-dns | IPAllocated | Assigned IP ["192.168.122.80"]
default | endpoint-controller | dnsmasq-dns | FailedToCreateEndpoint | Failed to create endpoint for service openstack/dnsmasq-dns: endpoints "dnsmasq-dns" already exists
openstack | default-scheduler | dnsmasq-dns-55994974c5-pnrh6 | Scheduled | Successfully assigned openstack/dnsmasq-dns-55994974c5-pnrh6 to master-0
openstack | replicaset-controller | dnsmasq-dns-55994974c5 | SuccessfulCreate | Created pod: dnsmasq-dns-55994974c5-pnrh6
openstack | cert-manager-certificates-key-manager | rabbitmq-cell1-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-hl2tg"
openstack | cert-manager-certificates-trigger | rabbitmq-cell1-svc | Issuing | Issuing certificate as Secret does not exist
openstack | default-scheduler | dnsmasq-dns-5d859fb5df-rqt5m | Scheduled | Successfully assigned openstack/dnsmasq-dns-5d859fb5df-rqt5m to master-0
openstack | replicaset-controller | dnsmasq-dns-5d859fb5df | SuccessfulCreate | Created pod: dnsmasq-dns-5d859fb5df-rqt5m
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rabbitmq-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | dnsmasq-dns-5d859fb5df-rqt5m | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | rabbitmq-cell1-svc | Requested | Created new CertificateRequest resource "rabbitmq-cell1-svc-1"
openstack | cert-manager-certificates-request-manager | rabbitmq-svc | Requested | Created new CertificateRequest resource "rabbitmq-svc-1"
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5d859fb5df-rqt5m | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-55994974c5-pnrh6 | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-55994974c5-pnrh6 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | rabbitmq-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | rabbitmq-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-issuing | rabbitmq-cell1-svc | Issuing | The certificate has been successfully issued
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq of Type *v1.Service
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-nodes of Type *v1.Service
openstack | metallb-controller | rabbitmq | IPAllocated | Assigned IP ["172.17.0.85"] (x2)
openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-erlang-cookie of Type *v1.Secret
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-default-user of Type *v1.Secret
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-plugins-conf of Type *v1.ConfigMap
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server-conf of Type *v1.ConfigMap
openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | (combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.RoleBinding
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-peer-discovery of Type *v1.Role
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.ServiceAccount
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-default-user of Type *v1.Secret (x3)
openstack | cert-manager-issuers | rootca-ovn | KeyPairVerified | Signing CA verified
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1 of Type *v1.Service (x2)
openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | rabbitmq-cell1 | IPAllocated | Assigned IP ["172.17.0.86"]
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-nodes of Type *v1.Service
openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | replicaset-controller | dnsmasq-dns-6779d95cff | SuccessfulCreate | Created pod: dnsmasq-dns-6779d95cff-xxcrz
openstack | default-scheduler | dnsmasq-dns-6779d95cff-xxcrz | Scheduled | Successfully assigned openstack/dnsmasq-dns-6779d95cff-xxcrz to master-0
openstack | cert-manager-certificates-trigger | galera-openstack-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | galera-openstack-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-svc-5pqc2"
openstack | cert-manager-certificates-request-manager | galera-openstack-svc | Requested | Created new CertificateRequest resource "galera-openstack-svc-1"
openstack | replicaset-controller | dnsmasq-dns-55994974c5 | SuccessfulDelete | Deleted pod: dnsmasq-dns-55994974c5-pnrh6
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-6779d95cff to 1 from 0 (x3)
openstack | cert-manager-issuers | rootca-libvirt | KeyPairVerified | Signing CA verified
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-55994974c5 to 0 from 1
openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful
openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | persistence-rabbitmq-cell1-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0" (x2)
openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5d859fb5df to 0 from 1
openstack | replicaset-controller | dnsmasq-dns-6f75dd7cd9 | SuccessfulCreate | Created pod: dnsmasq-dns-6f75dd7cd9-k7sq4
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
openstack | multus | dnsmasq-dns-6779d95cff-xxcrz | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes
openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success
openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.ServiceAccount
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-peer-discovery of Type *v1.Role
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.RoleBinding
openstack | cert-manager-certificates-trigger | galera-openstack-cell1-svc | Issuing | Issuing certificate as Secret does not exist
openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success
openstack | replicaset-controller | dnsmasq-dns-5d859fb5df | SuccessfulDelete | Deleted pod: dnsmasq-dns-5d859fb5df-rqt5m
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | (combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet
openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Pod openstack-galera-0 in StatefulSet openstack-galera successful
openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding (x2)
openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificaterequests-approver | galera-openstack-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | default-scheduler | dnsmasq-dns-6f75dd7cd9-k7sq4 | Scheduled | Successfully assigned openstack/dnsmasq-dns-6f75dd7cd9-k7sq4 to master-0
openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-6f75dd7cd9 to 1 from 0
openstack | cert-manager-certificates-issuing | galera-openstack-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | galera-openstack-cell1-svc | Requested | Created new CertificateRequest resource "galera-openstack-cell1-svc-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | galera-openstack-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
openstack | cert-manager-certificates-key-manager | galera-openstack-cell1-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-mbln9"
openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-6f75dd7cd9-k7sq4 | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes
openstack | cert-manager-certificates-issuing | galera-openstack-cell1-svc | Issuing | The certificate has been successfully issued
openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful
openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success
openstack | cert-manager-certificates-trigger | memcached-svc | Issuing | Issuing certificate as Secret does not exist
openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificates-key-manager | ovn-metrics | Generated | Stored new private key in temporary Secret resource "ovn-metrics-xvpbs"
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ovn-metrics-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | statefulset-controller | memcached | SuccessfulCreate | create Pod memcached-0 in StatefulSet memcached successful
openstack | cert-manager-certificates-issuing | memcached-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-request-manager | memcached-svc | Requested | Created new CertificateRequest resource "memcached-svc-1"
openstack | cert-manager-certificates-key-manager | memcached-svc | Generated | Stored new private key in temporary Secret resource "memcached-svc-bfvfl"
openstack | cert-manager-certificates-trigger | ovn-metrics | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-trigger | ovnnorthd-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | memcached-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | default-scheduler | memcached-0 | Scheduled | Successfully assigned openstack/memcached-0 to master-0
openstack | cert-manager-certificaterequests-issuer-venafi | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | ovn-metrics | Requested | Created new CertificateRequest resource "ovn-metrics-1"
openstack | cert-manager-certificates-issuing | ovn-metrics | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-trigger | ovncontroller-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | ovndbcluster-sb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-slb8v"
openstack | cert-manager-certificates-trigger | ovndbcluster-sb-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ovndbcluster-sb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | ovncontroller-ovndbs | Generated | Stored new private key in temporary Secret resource "ovncontroller-ovndbs-j45k7"
openstack | cert-manager-certificates-trigger | neutron-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-request-manager | ovndbcluster-sb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | ovnnorthd-ovndbs | Generated | Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-fpgf6"
openstack | cert-manager-certificates-request-manager | ovnnorthd-ovndbs | Requested | Created new CertificateRequest resource "ovnnorthd-ovndbs-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | default-scheduler | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0
openstack | cert-manager-certificaterequests-approver | ovnnorthd-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-trigger | ovndbcluster-nb-ovndbs | Issuing | Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

persistence-rabbitmq-server-0

Provisioning

External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0"

openstack

cert-manager-certificaterequests-issuer-venafi

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ovncontroller-ovndbs

Requested

Created new CertificateRequest resource "ovncontroller-ovndbs-1"

openstack

cert-manager-certificates-key-manager

neutron-ovndbs

Generated

Stored new private key in temporary Secret resource "neutron-ovndbs-m8g29"

openstack

cert-manager-certificaterequests-issuer-acme

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

persistence-rabbitmq-cell1-server-0

ProvisioningSucceeded

Successfully provisioned volume pvc-a589decc-6872-4a81-90a7-55085fdbb47d

openstack

cert-manager-certificates-issuing

ovndbcluster-sb-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

ovnnorthd-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-acme

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

neutron-ovndbs

Requested

Created new CertificateRequest resource "neutron-ovndbs-1"

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

persistence-rabbitmq-server-0

ProvisioningSucceeded

Successfully provisioned volume pvc-93c6cdb6-c88b-4ed7-afb8-83aa261a592c

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

mysql-db-openstack-galera-0

Provisioning

External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0"

openstack

cert-manager-certificaterequests-issuer-ca

ovncontroller-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ovndbcluster-nb-ovndbs

Generated

Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-gs2hh"

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

default-scheduler

rabbitmq-server-0

Scheduled

Successfully assigned openstack/rabbitmq-server-0 to master-0

openstack

cert-manager-certificaterequests-approver

ovncontroller-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

mysql-db-openstack-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-ca0b5327-3d55-4f6b-9b59-09ca4533e9e1

openstack

daemonset-controller

ovn-controller

SuccessfulCreate

Created pod: ovn-controller-8nzj6

openstack

default-scheduler

openstack-galera-0

Scheduled

Successfully assigned openstack/openstack-galera-0 to master-0

openstack

daemonset-controller

ovn-controller-ovs

SuccessfulCreate

Created pod: ovn-controller-ovs-2cgqz

openstack

default-scheduler

ovn-controller-ovs-2cgqz

Scheduled

Successfully assigned openstack/ovn-controller-ovs-2cgqz to master-0

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

mysql-db-openstack-cell1-galera-0

Provisioning

External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0"

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

default-scheduler

ovn-controller-8nzj6

Scheduled

Successfully assigned openstack/ovn-controller-8nzj6 to master-0

openstack

cert-manager-certificaterequests-approver

neutron-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

mysql-db-openstack-cell1-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-e4151f20-922d-4c86-a308-b1eab7265e73

openstack

cert-manager-certificates-issuing

ovncontroller-ovndbs

Issuing

The certificate has been successfully issued

openstack

default-scheduler

openstack-cell1-galera-0

Scheduled

Successfully assigned openstack/openstack-cell1-galera-0 to master-0

openstack

cert-manager-certificates-request-manager

ovndbcluster-nb-ovndbs

Requested

Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ovndbcluster-nb-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2)
openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success
openstack | cert-manager-certificates-issuing | ovndbcluster-nb-ovndbs | Issuing | The certificate has been successfully issued
openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful
openstack | cert-manager-certificates-issuing | neutron-ovndbs | Issuing | The certificate has been successfully issued
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0"
openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful
openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2)
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0"
openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success
openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-a989983c-76c3-46c8-a8a4-81bcbb7ed698
openstack | default-scheduler | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-aac4b125-e02e-425b-a5fe-e4b813eecbc8
openstack | default-scheduler | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Created | Created container: init
openstack | kubelet | dnsmasq-dns-5d859fb5df-rqt5m | Started | Started container init
openstack | kubelet | dnsmasq-dns-55994974c5-pnrh6 | Created | Created container: init
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Started | Started container init
openstack | kubelet | dnsmasq-dns-5d859fb5df-rqt5m | Created | Created container: init
openstack | kubelet | dnsmasq-dns-55994974c5-pnrh6 | Started | Started container init
openstack | kubelet | rabbitmq-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 18.476s (18.476s including waiting). Image size: 679322452 bytes.
openstack | kubelet | dnsmasq-dns-55994974c5-pnrh6 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 22.131s (22.131s including waiting). Image size: 679322452 bytes.
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 18.148s (18.148s including waiting). Image size: 679322452 bytes.
openstack | kubelet | dnsmasq-dns-5d859fb5df-rqt5m | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 21.89s (21.89s including waiting). Image size: 679322452 bytes.
openstack | multus | rabbitmq-server-0 | AddedInterface | Add eth0 [10.128.0.174/23] from ovn-kubernetes
openstack | multus | rabbitmq-cell1-server-0 | AddedInterface | Add eth0 [10.128.0.173/23] from ovn-kubernetes
openstack | kubelet | openstack-cell1-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
openstack | multus | ovn-controller-ovs-2cgqz | AddedInterface | Add eth0 [10.128.0.176/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Started | Started container init
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Created | Created container: init
openstack | kubelet | openstack-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add eth0 [10.128.0.179/23] from ovn-kubernetes
openstack | multus | ovn-controller-ovs-2cgqz | AddedInterface | Add datacentre [] from openstack/datacentre
openstack | multus | ovn-controller-ovs-2cgqz | AddedInterface | Add ironic [172.20.1.30/24] from openstack/ironic
openstack | multus | openstack-cell1-galera-0 | AddedInterface | Add eth0 [10.128.0.178/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | memcached-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-memcached:current-podified"
openstack | multus | memcached-0 | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes
openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add eth0 [10.128.0.180/23] from ovn-kubernetes
openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add internalapi [172.17.0.30/24] from openstack/internalapi
openstack | kubelet | rabbitmq-cell1-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
openstack | multus | ovn-controller-8nzj6 | AddedInterface | Add eth0 [10.128.0.175/23] from ovn-kubernetes
openstack | kubelet | ovn-controller-8nzj6 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
openstack | multus | openstack-galera-0 | AddedInterface | Add eth0 [10.128.0.177/23] from ovn-kubernetes
openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified"
openstack | kubelet | ovn-controller-ovs-2cgqz | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified"
openstack | multus | ovn-controller-ovs-2cgqz | AddedInterface | Add tenant [172.19.0.30/24] from openstack/tenant
openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add internalapi [172.17.0.31/24] from openstack/internalapi
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Started | Started container dnsmasq-dns
openstack | kubelet | ovsdbserver-sb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified"
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Created | Created container: dnsmasq-dns
openstack | kubelet | openstack-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" in 9.066s (9.066s including waiting). Image size: 429822276 bytes.
openstack | kubelet | openstack-cell1-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" in 8.993s (8.993s including waiting). Image size: 429822276 bytes.
openstack | kubelet | ovn-controller-ovs-2cgqz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" in 8.909s (8.909s including waiting). Image size: 324641297 bytes.
openstack | kubelet | ovn-controller-8nzj6 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" in 9.193s (9.193s including waiting). Image size: 347014089 bytes.
openstack | kubelet | rabbitmq-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" in 9.623s (9.623s including waiting). Image size: 304861257 bytes.
openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" in 9.186s (9.186s including waiting). Image size: 304861257 bytes.
openstack | kubelet | memcached-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached:current-podified" in 9.433s (9.433s including waiting). Image size: 277800650 bytes.
openstack | kubelet | ovsdbserver-sb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" in 8.295s (8.295s including waiting). Image size: 347188517 bytes.
openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" in 9.321s (9.321s including waiting). Image size: 347188005 bytes.
openstack | kubelet | openstack-galera-0 | Started | Started container mysql-bootstrap
openstack | kubelet | openstack-galera-0 | Created | Created container: mysql-bootstrap
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-6779d95cff to 0 from 1
openstack | kubelet | rabbitmq-server-0 | Started | Started container setup-container
openstack | kubelet | dnsmasq-dns-6779d95cff-xxcrz | Killing | Stopping container dnsmasq-dns
openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: mysql-bootstrap
openstack | kubelet | openstack-cell1-galera-0 | Started | Started container mysql-bootstrap
openstack | kubelet | ovn-controller-8nzj6 | Started | Started container ovn-controller
openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: ovsdbserver-nb
openstack | kubelet | ovsdbserver-nb-0 | Started | Started container ovsdbserver-nb
openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
openstack | replicaset-controller | dnsmasq-dns-6779d95cff | SuccessfulDelete | Deleted pod: dnsmasq-dns-6779d95cff-xxcrz
openstack | kubelet | ovn-controller-ovs-2cgqz | Created | Created container: ovsdb-server-init
openstack | kubelet | ovn-controller-8nzj6 | Created | Created container: ovn-controller
openstack | kubelet | ovsdbserver-sb-0 | Created | Created container: ovsdbserver-sb
openstack | kubelet | memcached-0 | Started | Started container memcached
openstack | kubelet | memcached-0 | Created | Created container: memcached
openstack | kubelet | rabbitmq-server-0 | Created | Created container: setup-container
openstack | kubelet | ovn-controller-ovs-2cgqz | Started | Started container ovsdb-server-init
openstack | kubelet | ovsdbserver-sb-0 | Started | Started container ovsdbserver-sb
openstack | kubelet | ovsdbserver-sb-0 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: setup-container
openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container setup-container
openstack | kubelet | ovn-controller-ovs-2cgqz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" already present on machine
openstack | kubelet | ovn-controller-ovs-2cgqz | Created | Created container: ovsdb-server
openstack | kubelet | ovn-controller-ovs-2cgqz | Started | Started container ovsdb-server
openstack | kubelet | ovn-controller-ovs-2cgqz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" already present on machine
openstack | kubelet | ovn-controller-ovs-2cgqz | Created | Created container: ovs-vswitchd
openstack | kubelet | ovn-controller-ovs-2cgqz | Started | Started container ovs-vswitchd
openstack | metallb-controller | dnsmasq-dns-ironic | IPAllocated | Assigned IP ["172.20.1.80"]
openstack | metallb-controller | dnsmasq-dns-ironic | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | dnsmasq-dns-ironic | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | dnsmasq-dns-ironic | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" in 7.889s (7.889s including waiting). Image size: 165206333 bytes.
openstack | kubelet | ovsdbserver-sb-0 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" in 7.748s (7.748s including waiting). Image size: 165206333 bytes.
openstack | kubelet | openstack-cell1-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | openstack-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | ovsdbserver-nb-0 | Started | Started container openstack-network-exporter
openstack | persistentvolume-controller | swift-swift-storage-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificaterequests-issuer-venafi | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: openstack-network-exporter
openstack | default-scheduler | dnsmasq-dns-998757459-wdrgr | Scheduled | Successfully assigned openstack/dnsmasq-dns-998757459-wdrgr to master-0
openstack | replicaset-controller | dnsmasq-dns-998757459 | SuccessfulCreate | Created pod: dnsmasq-dns-998757459-wdrgr
openstack | kubelet | openstack-galera-0 | Started | Started container galera
openstack | kubelet | openstack-galera-0 | Created | Created container: galera
openstack | kubelet | ovsdbserver-sb-0 | Started | Started container openstack-network-exporter
openstack | kubelet | ovsdbserver-sb-0 | Created | Created container: openstack-network-exporter
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-998757459 to 1
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | swift-swift-storage-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0"
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | swift-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | kubelet | openstack-cell1-galera-0 | Started | Started container galera
openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: galera
openstack | cert-manager-certificaterequests-issuer-vault | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | persistentvolume-controller | swift-swift-storage-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | cert-manager-certificaterequests-issuer-acme | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Pod swift-storage-0 in StatefulSet swift-storage successful
openstack | cert-manager-certificates-trigger | swift-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-request-manager | swift-internal-svc | Requested | Created new CertificateRequest resource "swift-internal-svc-1"
openstack | cert-manager-certificates-key-manager | swift-internal-svc | Generated | Stored new private key in temporary Secret resource "swift-internal-svc-j9hgk"
openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | swift-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-998757459-wdrgr | Started | Started container init
openstack | cert-manager-certificates-issuing | swift-internal-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-998757459-wdrgr | Created | Created container: init
openstack | cert-manager-certificates-trigger | swift-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | swift-public-svc | Generated | Stored new private key in temporary Secret resource "swift-public-svc-bjgwb"
openstack | cert-manager-certificates-request-manager | swift-public-svc | Requested | Created new CertificateRequest resource "swift-public-svc-1"
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | swift-swift-storage-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-b569bfd0-b2b4-45a5-9a8b-96201e72fc57
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | swift-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-998757459-wdrgr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | multus | dnsmasq-dns-998757459-wdrgr | AddedInterface | Add eth0 [10.128.0.181/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | swift-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-998757459-wdrgr | Started | Started container dnsmasq-dns
openstack | default-scheduler | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0
openstack | cert-manager-certificates-request-manager | swift-public-route | Requested | Created new CertificateRequest resource "swift-public-route-1"
openstack | kubelet | dnsmasq-dns-998757459-wdrgr | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-998757459-wdrgr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | cert-manager-certificates-key-manager | swift-public-route | Generated | Stored new private key in temporary Secret resource "swift-public-route-hx5ks"
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | swift-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | swift-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | replicaset-controller | dnsmasq-dns-795f757f69 | SuccessfulCreate | Created pod: dnsmasq-dns-795f757f69-cvqcm
openstack | replicaset-controller | dnsmasq-dns-795f757f69 | SuccessfulDelete | Deleted pod: dnsmasq-dns-795f757f69-cvqcm
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-998757459 to 0 from 1
openstack | daemonset-controller | ovn-controller-metrics | SuccessfulCreate | Created pod: ovn-controller-metrics-wtqbk
openstack | replicaset-controller | dnsmasq-dns-6b9cd4dcf7 | SuccessfulCreate | Created pod: dnsmasq-dns-6b9cd4dcf7-dmhrm
openstack | replicaset-controller | dnsmasq-dns-998757459 | SuccessfulDelete | Deleted pod: dnsmasq-dns-998757459-wdrgr
openstack | default-scheduler | ovn-controller-metrics-wtqbk | Scheduled | Successfully assigned openstack/ovn-controller-metrics-wtqbk to master-0
openstack | default-scheduler | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0
openstack | statefulset-controller | ovn-northd | SuccessfulCreate | create Pod ovn-northd-0 in StatefulSet ovn-northd successful
openstack | cert-manager-certificates-issuing | swift-public-route | Issuing | The certificate has been successfully issued
openstack | default-scheduler | dnsmasq-dns-795f757f69-cvqcm | Scheduled | Successfully assigned openstack/dnsmasq-dns-795f757f69-cvqcm to master-0
openstack | default-scheduler | dnsmasq-dns-6b9cd4dcf7-dmhrm | Scheduled | Successfully assigned openstack/dnsmasq-dns-6b9cd4dcf7-dmhrm to master-0
openstack | kubelet | ovn-controller-metrics-wtqbk | Started | Started container openstack-network-exporter
openstack | multus | dnsmasq-dns-795f757f69-cvqcm | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes
openstack | kubelet | ovn-controller-metrics-wtqbk | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" already present on machine
openstack | default-scheduler | swift-ring-rebalance-gl66q | Scheduled | Successfully assigned openstack/swift-ring-rebalance-gl66q to master-0
openstack | kubelet | dnsmasq-dns-6b9cd4dcf7-dmhrm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | dnsmasq-dns-6b9cd4dcf7-dmhrm | Created | Created container: init
openstack | kubelet | dnsmasq-dns-795f757f69-cvqcm | Started | Started container init
openstack | kubelet | dnsmasq-dns-6b9cd4dcf7-dmhrm | Started | Started container init
openstack | multus | ovn-northd-0 | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes
openstack | kubelet | ovn-northd-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified"
openstack | job-controller | swift-ring-rebalance | SuccessfulCreate | Created pod: swift-ring-rebalance-gl66q
openstack | multus | ovn-controller-metrics-wtqbk | AddedInterface | Add eth0 [10.128.0.184/23] from ovn-kubernetes
openstack | multus | dnsmasq-dns-6b9cd4dcf7-dmhrm | AddedInterface | Add eth0 [10.128.0.185/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-795f757f69-cvqcm | Created | Created container: init
openstack | kubelet | dnsmasq-dns-795f757f69-cvqcm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | ovn-controller-metrics-wtqbk | Created | Created container: openstack-network-exporter
openstack | kubelet | dnsmasq-dns-998757459-wdrgr | Killing | Stopping container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6b9cd4dcf7-dmhrm | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6b9cd4dcf7-dmhrm | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6b9cd4dcf7-dmhrm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | multus | swift-ring-rebalance-gl66q | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes
openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter
openstack | kubelet | swift-ring-rebalance-gl66q | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified"
openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" in 1.256s (1.256s including waiting). Image size: 347185612 bytes.
openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd
openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd
openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" already present on machine
openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1 of Type *v1.Service (x5)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-server of Type *v1.StatefulSet (x5)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq of Type *v1.Service (x5)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-server of Type *v1.StatefulSet (x5)
openstack | default-scheduler | placement-db-create-bbghv | Scheduled | Successfully assigned openstack/placement-db-create-bbghv to master-0
openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-bbghv
openstack | default-scheduler | placement-8d80-account-create-update-dxjzn | Scheduled | Successfully assigned openstack/placement-8d80-account-create-update-dxjzn to master-0
openstack | job-controller | placement-8d80-account-create-update | SuccessfulCreate | Created pod: placement-8d80-account-create-update-dxjzn
openstack | multus | placement-db-create-bbghv | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes
openstack | multus | placement-8d80-account-create-update-dxjzn | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes
openstack | default-scheduler | glance-db-create-4n48d | Scheduled | Successfully assigned openstack/glance-db-create-4n48d to master-0
openstack | default-scheduler | glance-df7d-account-create-update-pbhhl | Scheduled | Successfully assigned openstack/glance-df7d-account-create-update-pbhhl to master-0
openstack | job-controller | glance-db-create | SuccessfulCreate | Created pod: glance-db-create-4n48d
openstack | job-controller | glance-df7d-account-create-update | SuccessfulCreate | Created pod: glance-df7d-account-create-update-pbhhl
openstack | kubelet | placement-db-create-bbghv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | placement-8d80-account-create-update-dxjzn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | swift-ring-rebalance-gl66q | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" in 5.941s (5.941s including waiting). Image size: 500402961 bytes.
openstack | kubelet | swift-ring-rebalance-gl66q | Started | Started container swift-ring-rebalance
openstack | kubelet | swift-ring-rebalance-gl66q | Created | Created container: swift-ring-rebalance
openstack | kubelet | glance-df7d-account-create-update-pbhhl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | glance-df7d-account-create-update-pbhhl | Created | Created container: mariadb-account-create-update
openstack | multus | glance-df7d-account-create-update-pbhhl | AddedInterface | Add eth0 [10.128.0.191/23] from ovn-kubernetes
openstack | kubelet | placement-8d80-account-create-update-dxjzn | Created | Created container: mariadb-account-create-update
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-bm6p5
openstack | kubelet | glance-db-create-4n48d | Started | Started container mariadb-database-create
openstack | kubelet | placement-8d80-account-create-update-dxjzn | Started | Started container mariadb-account-create-update
openstack | kubelet | placement-db-create-bbghv | Created | Created container: mariadb-database-create
openstack | kubelet | glance-df7d-account-create-update-pbhhl | Started | Started container mariadb-account-create-update
openstack | kubelet | placement-db-create-bbghv | Started | Started container mariadb-database-create
openstack | default-scheduler | root-account-create-update-bm6p5 | Scheduled | Successfully assigned openstack/root-account-create-update-bm6p5 to master-0
openstack | multus | glance-db-create-4n48d | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes
openstack | kubelet | glance-db-create-4n48d | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | glance-db-create-4n48d | Created | Created container: mariadb-database-create
openstack | kubelet | root-account-create-update-bm6p5 | Started | Started container mariadb-account-create-update
openstack | multus | root-account-create-update-bm6p5 | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes
openstack | kubelet | root-account-create-update-bm6p5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | root-account-create-update-bm6p5 | Created | Created container: mariadb-account-create-update
openstack | replicaset-controller | dnsmasq-dns-6f75dd7cd9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-6f75dd7cd9-k7sq4
openstack | kubelet | dnsmasq-dns-6f75dd7cd9-k7sq4 | Killing | Stopping container dnsmasq-dns
openstack | job-controller | glance-db-create | Completed | Job completed
openstack | job-controller | glance-df7d-account-create-update | Completed | Job completed
openstack | job-controller | placement-db-create | Completed | Job completed
openstack | job-controller | placement-8d80-account-create-update | Completed | Job completed
openstack | job-controller | root-account-create-update | Completed | Job completed
openstack | job-controller | keystone-4837-account-create-update | SuccessfulCreate | Created pod: keystone-4837-account-create-update-9nldk
openstack | kubelet | swift-storage-0 | FailedMount | MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found (x6)
openstack | default-scheduler | keystone-4837-account-create-update-9nldk | Scheduled | Successfully assigned openstack/keystone-4837-account-create-update-9nldk to master-0
openstack | default-scheduler | keystone-db-create-cl2fb | Scheduled | Successfully assigned openstack/keystone-db-create-cl2fb to master-0
openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-cl2fb
openstack | kubelet | keystone-db-create-cl2fb | Started | Started container mariadb-database-create
openstack | kubelet | keystone-4837-account-create-update-9nldk | Started | Started container mariadb-account-create-update
openstack | kubelet | keystone-4837-account-create-update-9nldk | Created | Created container: mariadb-account-create-update
openstack | kubelet | keystone-4837-account-create-update-9nldk | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | keystone-db-create-cl2fb | Created | Created container: mariadb-database-create
openstack | kubelet | keystone-db-create-cl2fb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | multus | keystone-db-create-cl2fb | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes
openstack | multus | keystone-4837-account-create-update-9nldk | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes
openstack | job-controller | glance-db-sync | SuccessfulCreate | Created pod: glance-db-sync-wtgz7
openstack | default-scheduler | glance-db-sync-wtgz7 | Scheduled | Successfully assigned openstack/glance-db-sync-wtgz7 to master-0
openstack | multus | glance-db-sync-wtgz7 | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes
openstack | kubelet | glance-db-sync-wtgz7 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
openstack | multus | glance-db-sync-wtgz7 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage
openstack | job-controller | keystone-db-create | Completed | Job completed
openstack | job-controller | swift-ring-rebalance | Completed | Job completed
openstack | job-controller | keystone-4837-account-create-update | Completed | Job completed
openstack | kubelet | rabbitmq-server-0 | Started | Started container rabbitmq
openstack | kubelet | rabbitmq-server-0 | Created | Created container: rabbitmq
openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" already present on machine
openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: rabbitmq
openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container rabbitmq
openstack | kubelet | rabbitmq-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" already present on machine
openstack | default-scheduler | root-account-create-update-468nq | Scheduled | Successfully assigned openstack/root-account-create-update-468nq to master-0
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-468nq
openstack | job-controller | ovn-controller-8nzj6-config | SuccessfulCreate | Created pod: ovn-controller-8nzj6-config-kblpr
openstack | kubelet | ovn-controller-8nzj6 | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status (x3)
openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered
openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered
openstack | kubelet | glance-db-sync-wtgz7 | Started | Started container glance-db-sync
openstack | kubelet | root-account-create-update-468nq | Started | Started container mariadb-account-create-update
openstack | kubelet | root-account-create-update-468nq | Created | Created container: mariadb-account-create-update
openstack | kubelet | root-account-create-update-468nq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | multus | root-account-create-update-468nq | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes
openstack | kubelet | ovn-controller-8nzj6-config-kblpr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" already present on machine
openstack | multus | swift-storage-0 | AddedInterface | Add eth0 [10.128.0.182/23] from ovn-kubernetes
openstack | multus | ovn-controller-8nzj6-config-kblpr | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes
openstack

kubelet

glance-db-sync-wtgz7

Created

Created container: glance-db-sync

openstack

kubelet

glance-db-sync-wtgz7

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" in 13.017s (13.017s including waiting). Image size: 983190896 bytes.

openstack

kubelet

swift-storage-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified"

openstack

kubelet

ovn-controller-8nzj6-config-kblpr

Started

Started container ovn-config

openstack

kubelet

ovn-controller-8nzj6-config-kblpr

Created

Created container: ovn-config

openstack

kubelet

swift-storage-0

Started

Started container account-server

openstack

kubelet

swift-storage-0

Started

Started container account-replicator

openstack

kubelet

swift-storage-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" in 1.289s (1.289s including waiting). Image size: 445346822 bytes.

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" already present on machine

openstack

kubelet

swift-storage-0

Created

Created container: account-auditor

openstack

kubelet

swift-storage-0

Started

Started container account-auditor

openstack

kubelet

swift-storage-0

Created

Created container: account-server

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" already present on machine

openstack

kubelet

swift-storage-0

Created

Created container: account-replicator

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" already present on machine

openstack

kubelet

swift-storage-0

Started

Started container account-reaper

openstack

kubelet

swift-storage-0

Created

Created container: account-reaper

openstack

job-controller

root-account-create-update

Completed

Job completed

openstack

kubelet

swift-storage-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container:current-podified"

openstack

kubelet

swift-storage-0

Created

Created container: container-replicator

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-container:current-podified" already present on machine

openstack

kubelet

swift-storage-0

Started

Started container container-server

openstack

kubelet

swift-storage-0

Created

Created container: container-server

openstack

kubelet

swift-storage-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container:current-podified" in 1.073s (1.073s including waiting). Image size: 445362696 bytes.

openstack

job-controller

ovn-controller-8nzj6-config

Completed

Job completed

openstack

metallb-speaker

rabbitmq-cell1

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

metallb-speaker

rabbitmq

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

job-controller

cinder-db-create

SuccessfulCreate

Created pod: cinder-db-create-h7pn9

openstack

default-scheduler

cinder-f0a6-account-create-update-7g79m

Scheduled

Successfully assigned openstack/cinder-f0a6-account-create-update-7g79m to master-0

openstack

default-scheduler

dnsmasq-dns-dd6667767-7bv69

Scheduled

Successfully assigned openstack/dnsmasq-dns-dd6667767-7bv69 to master-0

openstack

replicaset-controller

dnsmasq-dns-dd6667767

SuccessfulCreate

Created pod: dnsmasq-dns-dd6667767-7bv69

openstack

default-scheduler

cinder-db-create-h7pn9

Scheduled

Successfully assigned openstack/cinder-db-create-h7pn9 to master-0

openstack

job-controller

cinder-f0a6-account-create-update

SuccessfulCreate

Created pod: cinder-f0a6-account-create-update-7g79m

openstack

kubelet

dnsmasq-dns-dd6667767-7bv69

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

job-controller

keystone-db-sync

SuccessfulCreate

Created pod: keystone-db-sync-mz8x8

openstack

job-controller

neutron-db-create

SuccessfulCreate

Created pod: neutron-db-create-4vctw

openstack

kubelet

cinder-f0a6-account-create-update-7g79m

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

multus

cinder-f0a6-account-create-update-7g79m

AddedInterface

Add eth0 [10.128.0.200/23] from ovn-kubernetes

openstack

kubelet

cinder-db-create-h7pn9

Started

Started container mariadb-database-create

openstack

multus

neutron-db-create-4vctw

AddedInterface

Add eth0 [10.128.0.201/23] from ovn-kubernetes

openstack

default-scheduler

neutron-db-create-4vctw

Scheduled

Successfully assigned openstack/neutron-db-create-4vctw to master-0

openstack

job-controller

neutron-ab9f-account-create-update

SuccessfulCreate

Created pod: neutron-ab9f-account-create-update-wxwc6

openstack

kubelet

cinder-db-create-h7pn9

Created

Created container: mariadb-database-create

openstack

kubelet

cinder-db-create-h7pn9

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

multus

cinder-db-create-h7pn9

AddedInterface

Add eth0 [10.128.0.199/23] from ovn-kubernetes

openstack

default-scheduler

keystone-db-sync-mz8x8

Scheduled

Successfully assigned openstack/keystone-db-sync-mz8x8 to master-0

openstack

default-scheduler

neutron-ab9f-account-create-update-wxwc6

Scheduled

Successfully assigned openstack/neutron-ab9f-account-create-update-wxwc6 to master-0

openstack

multus

dnsmasq-dns-dd6667767-7bv69

AddedInterface

Add eth0 [10.128.0.198/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-dd6667767-7bv69

Created

Created container: init

openstack

kubelet

dnsmasq-dns-dd6667767-7bv69

Started

Started container init

openstack

kubelet

neutron-ab9f-account-create-update-wxwc6

Created

Created container: mariadb-account-create-update

openstack

kubelet

neutron-db-create-4vctw

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

kubelet

dnsmasq-dns-dd6667767-7bv69

Started

Started container dnsmasq-dns

openstack

kubelet

cinder-f0a6-account-create-update-7g79m

Started

Started container mariadb-account-create-update

openstack

kubelet

dnsmasq-dns-dd6667767-7bv69

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

keystone-db-sync-mz8x8

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified"

openstack

multus

keystone-db-sync-mz8x8

AddedInterface

Add eth0 [10.128.0.202/23] from ovn-kubernetes

openstack

multus

neutron-ab9f-account-create-update-wxwc6

AddedInterface

Add eth0 [10.128.0.203/23] from ovn-kubernetes

openstack

kubelet

neutron-ab9f-account-create-update-wxwc6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

kubelet

cinder-f0a6-account-create-update-7g79m

Created

Created container: mariadb-account-create-update

openstack

kubelet

neutron-ab9f-account-create-update-wxwc6

Started

Started container mariadb-account-create-update

openstack

kubelet

dnsmasq-dns-dd6667767-7bv69

Created

Created container: dnsmasq-dns

openstack

kubelet

neutron-db-create-4vctw

Created

Created container: mariadb-database-create

openstack

kubelet

neutron-db-create-4vctw

Started

Started container mariadb-database-create

openstack

job-controller

glance-db-sync

Completed

Job completed

openstack | replicaset-controller | dnsmasq-dns-dd6667767 | SuccessfulDelete | Deleted pod: dnsmasq-dns-dd6667767-7bv69
openstack | replicaset-controller | dnsmasq-dns-89fcc4dcf | SuccessfulCreate | Created pod: dnsmasq-dns-89fcc4dcf-gml6g
openstack | default-scheduler | dnsmasq-dns-89fcc4dcf-gml6g | Scheduled | Successfully assigned openstack/dnsmasq-dns-89fcc4dcf-gml6g to master-0
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | glance-default-internal | IPAllocated | Assigned IP ["172.17.0.80"]
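Note: the three deprecatedAnnotation warnings plus IPAllocated come from the MetalLB controller processing a LoadBalancer Service that still carries the legacy metallb.universe.tf annotations; newer MetalLB releases favor CRD-based configuration but continue to honor the old keys. A hedged sketch of such a Service (selector, port, and pool name are illustrative; the annotation keys and IP come from the events):

  apiVersion: v1
  kind: Service
  metadata:
    name: glance-default-internal
    namespace: openstack
    annotations:
      metallb.universe.tf/address-pool: internalapi     # pool name assumed
      metallb.universe.tf/allow-shared-ip: internalapi  # lets several Services share 172.17.0.80
      metallb.universe.tf/loadBalancerIPs: 172.17.0.80  # the IP the controller reports as Assigned
  spec:
    type: LoadBalancer
    selector:
      app: glance                                       # illustrative selector
    ports:
    - port: 9292                                        # illustrative port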

openstack | cert-manager-certificates-trigger | glance-default-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-89fcc4dcf-gml6g | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | cert-manager-certificates-request-manager | glance-default-internal-svc | Requested | Created new CertificateRequest resource "glance-default-internal-svc-1"
openstack | multus | dnsmasq-dns-89fcc4dcf-gml6g | AddedInterface | Add eth0 [10.128.0.204/23] from ovn-kubernetes
openstack | cert-manager-certificates-trigger | glance-default-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | glance-default-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-issuing | glance-default-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-key-manager | glance-default-internal-svc | Generated | Stored new private key in temporary Secret resource "glance-default-internal-svc-r2fj2"
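Note: the sequence above (trigger Issuing, key-manager Generated, request-manager Requested, approver approval, issuer-ca CertificateIssued, then "successfully issued") is the standard cert-manager flow for a Certificate backed by a CA issuer. The acme/vault/venafi/selfsigned controllers each log WaitingForApproval because every issuer controller watches every CertificateRequest; only the one matching the issuerRef signs it once approved. A sketch of the kind of Certificate that produces this chain (secretName, issuerRef, and dnsNames are assumptions):

  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: glance-default-internal-svc
    namespace: openstack
  spec:
    secretName: cert-glance-default-internal-svc   # assumed; issuance fires because this Secret does not exist yet
    issuerRef:
      name: rootca-internal                        # hypothetical CA issuer
      kind: Issuer
    dnsNames:
    - glance-default-internal.openstack.svc        # illustrative SAN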

openstack | cert-manager-certificates-request-manager | glance-default-public-svc | Requested | Created new CertificateRequest resource "glance-default-public-svc-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | job-controller | neutron-ab9f-account-create-update | Completed | Job completed
openstack | cert-manager-certificates-key-manager | glance-default-public-svc | Generated | Stored new private key in temporary Secret resource "glance-default-public-svc-btcts"
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | glance-default-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | glance-default-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-dd6667767-7bv69 | Killing | Stopping container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-89fcc4dcf-gml6g | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-89fcc4dcf-gml6g | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificates-trigger | glance-default-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-89fcc4dcf-gml6g | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | dnsmasq-dns-89fcc4dcf-gml6g | Started | Started container init
openstack | kubelet | dnsmasq-dns-89fcc4dcf-gml6g | Created | Created container: init
openstack | job-controller | cinder-f0a6-account-create-update | Completed | Job completed
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | glance-default-public-route | Generated | Stored new private key in temporary Secret resource "glance-default-public-route-xfdzn"
openstack | job-controller | cinder-db-create | Completed | Job completed
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | glance-default-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-request-manager | glance-default-public-route | Requested | Created new CertificateRequest resource "glance-default-public-route-1"
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-issuing | glance-default-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | job-controller | neutron-db-create | Completed | Job completed
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | keystone-db-sync-mz8x8 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" in 6.405s (6.405s including waiting). Image size: 520351243 bytes.
openstack | kubelet | keystone-db-sync-mz8x8 | Started | Started container keystone-db-sync
openstack | kubelet | keystone-db-sync-mz8x8 | Created | Created container: keystone-db-sync
openstack | replicaset-controller | dnsmasq-dns-6b9cd4dcf7 | SuccessfulDelete | Deleted pod: dnsmasq-dns-6b9cd4dcf7-dmhrm
openstack | kubelet | dnsmasq-dns-6b9cd4dcf7-dmhrm | Killing | Stopping container dnsmasq-dns
openstack | job-controller | keystone-db-sync | Completed | Job completed
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | default-scheduler | dnsmasq-dns-5584778f8f-xqg9r | Scheduled | Successfully assigned openstack/dnsmasq-dns-5584778f8f-xqg9r to master-0
openstack | metallb-controller | keystone-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-4czgd
openstack | replicaset-controller | dnsmasq-dns-5584778f8f | SuccessfulCreate | Created pod: dnsmasq-dns-5584778f8f-xqg9r
openstack | default-scheduler | keystone-bootstrap-4czgd | Scheduled | Successfully assigned openstack/keystone-bootstrap-4czgd to master-0
openstack | statefulset-controller | glance-1280f-default-external-api | SuccessfulCreate | create Claim glance-glance-1280f-default-external-api-0 Pod glance-1280f-default-external-api-0 in StatefulSet glance-1280f-default-external-api success
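Note: the claim name in this event follows the StatefulSet convention <volumeClaimTemplate>-<statefulset>-<ordinal>: a template named glance on StatefulSet glance-1280f-default-external-api yields glance-glance-1280f-default-external-api-0 for pod ordinal 0. A sketch under that assumption (labels, storage size, and service name are illustrative):

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: glance-1280f-default-external-api
    namespace: openstack
  spec:
    serviceName: glance-1280f-default-external-api   # assumed headless Service name
    replicas: 1
    selector:
      matchLabels:
        app: glance-external                         # illustrative labels
    template:
      metadata:
        labels:
          app: glance-external
      spec:
        containers:
        - name: glance-api
          image: quay.io/podified-antelope-centos9/openstack-glance-api:current-podified
    volumeClaimTemplates:
    - metadata:
        name: glance             # template name; produces claim glance-glance-1280f-default-external-api-0
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi        # size is illustrative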

openstack | default-scheduler | ironic-db-create-9pxq8 | Scheduled | Successfully assigned openstack/ironic-db-create-9pxq8 to master-0
openstack | job-controller | cinder-675ba-db-sync | SuccessfulCreate | Created pod: cinder-675ba-db-sync-8zxxl
openstack | job-controller | ironic-db-create | SuccessfulCreate | Created pod: ironic-db-create-9pxq8
openstack | persistentvolume-controller | glance-glance-1280f-default-external-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | default-scheduler | neutron-db-sync-bxnnn | Scheduled | Successfully assigned openstack/neutron-db-sync-bxnnn to master-0
openstack | persistentvolume-controller | glance-glance-1280f-default-external-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2)
openstack | persistentvolume-controller | glance-glance-1280f-default-internal-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | persistentvolume-controller | glance-glance-1280f-default-internal-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | metallb-controller | placement-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | default-scheduler | ironic-39c2-account-create-update-jbfp7 | Scheduled | Successfully assigned openstack/ironic-39c2-account-create-update-jbfp7 to master-0
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Created | Created container: init
openstack | multus | dnsmasq-dns-5584778f8f-xqg9r | AddedInterface | Add eth0 [10.128.0.205/23] from ovn-kubernetes
openstack | job-controller | neutron-db-sync | SuccessfulCreate | Created pod: neutron-db-sync-bxnnn
openstack | default-scheduler | cinder-675ba-db-sync-8zxxl | Scheduled | Successfully assigned openstack/cinder-675ba-db-sync-8zxxl to master-0
openstack | job-controller | ironic-39c2-account-create-update | SuccessfulCreate | Created pod: ironic-39c2-account-create-update-jbfp7
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | glance-glance-1280f-default-external-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-1280f-default-external-api-0"
openstack | statefulset-controller | glance-1280f-default-internal-api | SuccessfulCreate | create Claim glance-glance-1280f-default-internal-api-0 Pod glance-1280f-default-internal-api-0 in StatefulSet glance-1280f-default-internal-api success
openstack | multus | keystone-bootstrap-4czgd | AddedInterface | Add eth0 [10.128.0.206/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
default | endpoint-controller | placement-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/placement-internal: endpoints "placement-internal" already exists
openstack | kubelet | keystone-bootstrap-4czgd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" already present on machine
openstack | kubelet | keystone-bootstrap-4czgd | Created | Created container: keystone-bootstrap
openstack | kubelet | keystone-bootstrap-4czgd | Started | Started container keystone-bootstrap
openstack | multus | neutron-db-sync-bxnnn | AddedInterface | Add eth0 [10.128.0.209/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Started | Started container init
openstack | job-controller | placement-db-sync | SuccessfulCreate | Created pod: placement-db-sync-8rrvj
openstack | multus | cinder-675ba-db-sync-8zxxl | AddedInterface | Add eth0 [10.128.0.208/23] from ovn-kubernetes
openstack | multus | ironic-39c2-account-create-update-jbfp7 | AddedInterface | Add eth0 [10.128.0.210/23] from ovn-kubernetes
openstack | multus | ironic-db-create-9pxq8 | AddedInterface | Add eth0 [10.128.0.207/23] from ovn-kubernetes
openstack | kubelet | neutron-db-sync-bxnnn | Started | Started container neutron-db-sync
openstack | cert-manager-certificates-key-manager | keystone-internal-svc | Generated | Stored new private key in temporary Secret resource "keystone-internal-svc-btxzl"
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine (x2)
openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | default-scheduler | placement-db-sync-8rrvj | Scheduled | Successfully assigned openstack/placement-db-sync-8rrvj to master-0
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | default-scheduler | dnsmasq-dns-846459fb55-9x6r8 | Scheduled | Successfully assigned openstack/dnsmasq-dns-846459fb55-9x6r8 to master-0
openstack | kubelet | neutron-db-sync-bxnnn | Created | Created container: neutron-db-sync
openstack | kubelet | ironic-db-create-9pxq8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | kubelet | ironic-db-create-9pxq8 | Created | Created container: mariadb-database-create
openstack | kubelet | ironic-db-create-9pxq8 | Started | Started container mariadb-database-create
openstack | kubelet | neutron-db-sync-bxnnn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | cert-manager-certificaterequests-issuer-vault | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | ironic-39c2-account-create-update-jbfp7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-request-manager | keystone-internal-svc | Requested | Created new CertificateRequest resource "keystone-internal-svc-1"
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Failed | Error: container create failed: mount `/var/lib/kubelet/pods/60432080-f735-4274-970e-58d2fa71550f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
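Note: the volume-subpaths path in this error marks it as a subPath mount failure. For every subPath, the kubelet prepares a bind source under /var/lib/kubelet/pods/<pod-uid>/volume-subpaths/<volume>/<container>/<index>, and the runtime bind-mounts it onto the target inside the container; here the create failed and, as the surrounding events show, the ReplicaSet replaced the pod. A minimal sketch of the kind of mount that yields such a path (the ConfigMap backing and key are assumptions; only the volume name dns-svc is taken from the message):

  apiVersion: v1
  kind: Pod
  metadata:
    name: dnsmasq-dns-example     # illustrative; the real pods are ReplicaSet-managed
    namespace: openstack
  spec:
    containers:
    - name: dnsmasq-dns
      image: quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified
      volumeMounts:
      - name: dns-svc
        mountPath: /etc/dnsmasq.d/hosts/dns-svc
        subPath: dns-svc          # subPath is what creates the volume-subpaths/... bind source
    volumes:
    - name: dns-svc
      configMap:
        name: dns-svc             # assumed ConfigMap name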

openstack | replicaset-controller | dnsmasq-dns-5584778f8f | SuccessfulDelete | Deleted pod: dnsmasq-dns-5584778f8f-xqg9r
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Created | Created container: dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-846459fb55 | SuccessfulCreate | Created pod: dnsmasq-dns-846459fb55-9x6r8
openstack | kubelet | cinder-675ba-db-sync-8zxxl | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
openstack | kubelet | dnsmasq-dns-5584778f8f-xqg9r | Killing | Stopping container dnsmasq-dns
openstack | cert-manager-certificates-issuing | keystone-internal-svc | Issuing | The certificate has been successfully issued
openstack | default-scheduler | glance-1280f-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-1280f-default-external-api-0 to master-0
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | glance-glance-1280f-default-internal-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-1280f-default-internal-api-0"
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | ironic-39c2-account-create-update-jbfp7 | Created | Created container: mariadb-account-create-update
openstack | cert-manager-certificaterequests-approver | keystone-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | placement-db-sync-8rrvj | AddedInterface | Add eth0 [10.128.0.211/23] from ovn-kubernetes
openstack | kubelet | placement-db-sync-8rrvj | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
openstack | kubelet | ironic-39c2-account-create-update-jbfp7 | Started | Started container mariadb-account-create-update
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | glance-glance-1280f-default-external-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-70c9925e-bbc2-47ea-836c-8b4fadf77223
openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-846459fb55-9x6r8 | AddedInterface | Add eth0 [10.128.0.212/23] from ovn-kubernetes
openstack | topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc | glance-glance-1280f-default-internal-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-7512aa1f-2488-47af-b61f-945377082816
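Note: the WaitForFirstConsumer, ExternalProvisioning, Provisioning, ProvisioningSucceeded progression for the two glance claims is the standard late-binding flow: binding is deferred until the scheduler places the consuming pod, after which the external provisioner named in the events (topolvm.io, run by the LVMS operator) creates the volume. A sketch of a StorageClass that produces exactly this behavior (only the class name is an assumption):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: lvms-vg1                           # hypothetical name
  provisioner: topolvm.io                    # provisioner named in the ExternalProvisioning events
  volumeBindingMode: WaitForFirstConsumer    # defers binding, producing the WaitForFirstConsumer event
  reclaimPolicy: Delete
  allowVolumeExpansion: true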

openstack | cert-manager-certificates-trigger | keystone-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-846459fb55-9x6r8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | dnsmasq-dns-846459fb55-9x6r8 | Created | Created container: init
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | default-scheduler | glance-1280f-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-1280f-default-internal-api-0 to master-0
openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-298n7"
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | keystone-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-846459fb55-9x6r8 | Started | Started container init
openstack | cert-manager-certificaterequests-approver | keystone-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-key-manager | keystone-public-route | Generated | Stored new private key in temporary Secret resource "keystone-public-route-xcdbs"
openstack | cert-manager-certificates-request-manager | keystone-public-route | Requested | Created new CertificateRequest resource "keystone-public-route-1"
openstack | cert-manager-certificates-trigger | placement-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | placement-internal-svc | Generated | Stored new private key in temporary Secret resource "placement-internal-svc-fctg9"
openstack | cert-manager-certificates-trigger | placement-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-request-manager | placement-internal-svc | Requested | Created new CertificateRequest resource "placement-internal-svc-1"
openstack | kubelet | dnsmasq-dns-846459fb55-9x6r8 | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificates-issuing | placement-internal-svc | Issuing | The certificate has been successfully issued
openstack | job-controller | ironic-db-create | Completed | Job completed
openstack | kubelet | dnsmasq-dns-846459fb55-9x6r8 | Started | Started container dnsmasq-dns
openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | placement-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | dnsmasq-dns-846459fb55-9x6r8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | multus | glance-1280f-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.213/23] from ovn-kubernetes
openstack | multus | glance-1280f-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage
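Note: pods that log two AddedInterface events get eth0 from the default ovn-kubernetes network and the extra storage interface from a Multus NetworkAttachmentDefinition, here openstack/storage, requested through the pod's k8s.v1.cni.cncf.io/networks annotation. A sketch of the two pieces (the CNI type and IPAM are assumptions; the 172.18.0.0/24 range matches the addresses seen in these events):

  # Pod-side request for the secondary interface:
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/networks: openstack/storage

  # The attachment it points at:
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: storage
    namespace: openstack
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "storage",
        "type": "macvlan",
        "ipam": { "type": "whereabouts", "range": "172.18.0.0/24" }
      }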

openstack | cert-manager-certificates-request-manager | placement-public-svc | Requested | Created new CertificateRequest resource "placement-public-svc-1"
openstack | cert-manager-certificates-trigger | placement-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-approver | placement-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | placement-public-svc | Generated | Stored new private key in temporary Secret resource "placement-public-svc-mr74s"
openstack | cert-manager-certificaterequests-issuer-vault | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | placement-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-issuing | placement-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | glance-1280f-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine
openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-vault | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | placement-public-route | Generated | Stored new private key in temporary Secret resource "placement-public-route-gxdkg"
openstack | kubelet | glance-1280f-default-external-api-0 | Created | Created container: glance-log
openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | placement-public-route | Requested | Created new CertificateRequest resource "placement-public-route-1"
openstack | kubelet | glance-1280f-default-external-api-0 | Started | Started container glance-log
openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | job-controller | ironic-39c2-account-create-update | Completed | Job completed
openstack | cert-manager-certificates-issuing | placement-public-route | Issuing | The certificate has been successfully issued
openstack | default-scheduler | glance-1280f-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-1280f-default-internal-api-0 to master-0
openstack | kubelet | placement-db-sync-8rrvj | Created | Created container: placement-db-sync
openstack | kubelet | glance-1280f-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine
openstack | multus | glance-1280f-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage
openstack | multus | glance-1280f-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.215/23] from ovn-kubernetes
openstack | kubelet | placement-db-sync-8rrvj | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" in 6.679s (6.679s including waiting). Image size: 472931542 bytes.
openstack | kubelet | placement-db-sync-8rrvj | Started | Started container placement-db-sync
openstack | job-controller | ironic-db-sync | SuccessfulCreate | Created pod: ironic-db-sync-hxms8
openstack | default-scheduler | ironic-db-sync-hxms8 | Scheduled | Successfully assigned openstack/ironic-db-sync-hxms8 to master-0
openstack | kubelet | glance-1280f-default-external-api-0 | Started | Started container glance-httpd
openstack | kubelet | glance-1280f-default-external-api-0 | Created | Created container: glance-httpd
openstack | kubelet | glance-1280f-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine
openstack | multus | ironic-db-sync-hxms8 | AddedInterface | Add eth0 [10.128.0.216/23] from ovn-kubernetes
openstack | kubelet | glance-1280f-default-internal-api-0 | Created | Created container: glance-log
openstack | kubelet | glance-1280f-default-external-api-0 | Killing | Stopping container glance-httpd
openstack | kubelet | glance-1280f-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine
openstack | kubelet | glance-1280f-default-internal-api-0 | Started | Started container glance-log
openstack | kubelet | glance-1280f-default-internal-api-0 | Created | Created container: glance-httpd
openstack | kubelet | glance-1280f-default-external-api-0 | Killing | Stopping container glance-log
openstack | kubelet | glance-1280f-default-internal-api-0 | Started | Started container glance-httpd
openstack | kubelet | ironic-db-sync-hxms8 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified"
openstack | replicaset-controller | dnsmasq-dns-89fcc4dcf | SuccessfulDelete | Deleted pod: dnsmasq-dns-89fcc4dcf-gml6g
openstack | job-controller | keystone-bootstrap | Completed | Job completed
openstack | default-scheduler | glance-1280f-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-1280f-default-external-api-0 to master-0
openstack | kubelet | dnsmasq-dns-89fcc4dcf-gml6g | Killing | Stopping container dnsmasq-dns
openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-tfddb
openstack | default-scheduler | keystone-bootstrap-tfddb | Scheduled | Successfully assigned openstack/keystone-bootstrap-tfddb to master-0
openstack | metallb-speaker | dnsmasq-dns | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x25)
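Note: nodeAssigned with protocol "layer2" means the MetalLB speaker on master-0 answers ARP/NDP for the service IP; the x25 count reflects periodic re-announcement. In CRD-based MetalLB, the pool/announcement pairing behind this looks roughly like the following (names and address range are illustrative; only 172.17.0.80 is attested by the IPAllocated events):

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: internalapi            # hypothetical pool name
    namespace: metallb-system
  spec:
    addresses:
    - 172.17.0.80-172.17.0.100   # range illustrative; 172.17.0.80 is the allocated IP above
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: internalapi
    namespace: metallb-system
  spec:
    ipAddressPools:
    - internalapi                # speakers then announce pool IPs with protocol "layer2"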

openstack

multus

keystone-bootstrap-tfddb

AddedInterface

Add eth0 [10.128.0.217/23] from ovn-kubernetes

openstack

kubelet

keystone-bootstrap-tfddb

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" already present on machine

openstack

kubelet

keystone-bootstrap-tfddb

Created

Created container: keystone-bootstrap

openstack

kubelet

keystone-bootstrap-tfddb

Started

Started container keystone-bootstrap

openstack

multus

glance-1280f-default-external-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

multus

glance-1280f-default-external-api-0

AddedInterface

Add eth0 [10.128.0.218/23] from ovn-kubernetes

openstack

default-scheduler

keystone-77c9977ddd-2q2jp

Scheduled

Successfully assigned openstack/keystone-77c9977ddd-2q2jp to master-0

openstack

kubelet

ironic-db-sync-hxms8

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" in 18.632s (18.632s including waiting). Image size: 599253577 bytes.

openstack

job-controller

placement-db-sync

Completed

Job completed

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

deployment-controller

keystone

ScalingReplicaSet

Scaled up replica set keystone-77c9977ddd to 1

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled up replica set placement-76cc655964 to 1

openstack

kubelet

glance-1280f-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine

openstack

replicaset-controller

keystone-77c9977ddd

SuccessfulCreate

Created pod: keystone-77c9977ddd-2q2jp

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled up replica set placement-6fd7c7bb8d to 1

openstack

kubelet

ironic-db-sync-hxms8

Created

Created container: init

openstack

kubelet

glance-1280f-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine

openstack

kubelet

glance-1280f-default-external-api-0

Started

Started container glance-log

openstack

kubelet

glance-1280f-default-external-api-0

Created

Created container: glance-log

openstack

kubelet

ironic-db-sync-hxms8

Started

Started container init

openstack

default-scheduler

placement-6fd7c7bb8d-6cc8x

Scheduled

Successfully assigned openstack/placement-6fd7c7bb8d-6cc8x to master-0

openstack

replicaset-controller

placement-76cc655964

SuccessfulCreate

Created pod: placement-76cc655964-lxxvl

openstack

default-scheduler

placement-76cc655964-lxxvl

Scheduled

Successfully assigned openstack/placement-76cc655964-lxxvl to master-0

openstack

replicaset-controller

placement-6fd7c7bb8d

SuccessfulCreate

Created pod: placement-6fd7c7bb8d-6cc8x

openstack

kubelet

cinder-675ba-db-sync-8zxxl

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" in 28.899s (28.899s including waiting). Image size: 1161387303 bytes.

openstack

kubelet

ironic-db-sync-hxms8

Created

Created container: ironic-db-sync

openstack

kubelet

keystone-77c9977ddd-2q2jp

Created

Created container: keystone-api

openstack

kubelet

ironic-db-sync-hxms8

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" already present on machine

openstack

kubelet

placement-76cc655964-lxxvl

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine

openstack

kubelet

placement-6fd7c7bb8d-6cc8x

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine

openstack

multus

placement-6fd7c7bb8d-6cc8x

AddedInterface

Add eth0 [10.128.0.221/23] from ovn-kubernetes

openstack

kubelet

ironic-db-sync-hxms8

Started

Started container ironic-db-sync

openstack

kubelet

keystone-77c9977ddd-2q2jp

Started

Started container keystone-api

openstack

kubelet

placement-76cc655964-lxxvl

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine

openstack

multus

placement-76cc655964-lxxvl

AddedInterface

Add eth0 [10.128.0.220/23] from ovn-kubernetes

openstack

kubelet

cinder-675ba-db-sync-8zxxl

Started

Started container cinder-675ba-db-sync

openstack

kubelet

glance-1280f-default-external-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-1280f-default-external-api-0

Started

Started container glance-httpd

openstack

kubelet

keystone-77c9977ddd-2q2jp

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" already present on machine

openstack

multus

keystone-77c9977ddd-2q2jp

AddedInterface

Add eth0 [10.128.0.219/23] from ovn-kubernetes

openstack

kubelet

placement-76cc655964-lxxvl

Started

Started container placement-log

openstack

kubelet

placement-76cc655964-lxxvl

Created

Created container: placement-log

openstack

kubelet

cinder-675ba-db-sync-8zxxl

Created

Created container: cinder-675ba-db-sync

openstack

kubelet

placement-6fd7c7bb8d-6cc8x

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine

openstack

kubelet

placement-6fd7c7bb8d-6cc8x

Created

Created container: placement-log

openstack

kubelet

placement-76cc655964-lxxvl

Created

Created container: placement-api

openstack

kubelet

placement-76cc655964-lxxvl

Started

Started container placement-api

openstack

kubelet

placement-6fd7c7bb8d-6cc8x

Started

Started container placement-api

openstack

kubelet

placement-6fd7c7bb8d-6cc8x

Created

Created container: placement-api

openstack

kubelet

placement-6fd7c7bb8d-6cc8x

Started

Started container placement-log
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-78756bd8 to 1
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

replicaset-controller

dnsmasq-dns-6456885d89

SuccessfulCreate

Created pod: dnsmasq-dns-6456885d89-8nk8d

openstack

default-scheduler

dnsmasq-dns-6456885d89-8nk8d

Scheduled

Successfully assigned openstack/dnsmasq-dns-6456885d89-8nk8d to master-0

openstack

metallb-controller

neutron-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

replicaset-controller

neutron-78756bd8

SuccessfulCreate

Created pod: neutron-78756bd8-c6jzz

openstack

default-scheduler

neutron-78756bd8-c6jzz

Scheduled

Successfully assigned openstack/neutron-78756bd8-c6jzz to master-0

openstack

job-controller

neutron-db-sync

Completed

Job completed

openstack

cert-manager-certificaterequests-issuer-acme

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

neutron-internal-svc

Requested

Created new CertificateRequest resource "neutron-internal-svc-1"

openstack

cert-manager-certificates-issuing

neutron-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-key-manager

neutron-internal-svc

Generated

Stored new private key in temporary Secret resource "neutron-internal-svc-qcwqg"

openstack

multus

neutron-78756bd8-c6jzz

AddedInterface

Add eth0 [10.128.0.223/23] from ovn-kubernetes

openstack

cert-manager-certificates-trigger

neutron-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

neutron-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

neutron-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

dnsmasq-dns-6456885d89-8nk8d

AddedInterface

Add eth0 [10.128.0.222/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-6456885d89-8nk8d

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

dnsmasq-dns-6456885d89-8nk8d

Started

Started container init

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-6456885d89-8nk8d

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

dnsmasq-dns-6456885d89-8nk8d

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-6456885d89-8nk8d

Created

Created container: init

openstack

kubelet

neutron-78756bd8-c6jzz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

cert-manager-certificates-issuing

neutron-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

neutron-78756bd8-c6jzz

AddedInterface

Add internalapi [172.17.0.32/24] from openstack/internalapi

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

neutron-78756bd8-c6jzz

Started

Started container neutron-api
(x25)

openstack

metallb-speaker

dnsmasq-dns-ironic

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

dnsmasq-dns-6456885d89-8nk8d

Created

Created container: dnsmasq-dns

openstack

kubelet

neutron-78756bd8-c6jzz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

neutron-78756bd8-c6jzz

Created

Created container: neutron-httpd

openstack

cert-manager-certificaterequests-approver

neutron-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

neutron-public-svc

Generated

Stored new private key in temporary Secret resource "neutron-public-svc-k89xv"

openstack

cert-manager-certificates-trigger

neutron-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-request-manager

neutron-public-svc

Requested

Created new CertificateRequest resource "neutron-public-svc-1"

openstack

kubelet

neutron-78756bd8-c6jzz

Started

Started container neutron-httpd

openstack

kubelet

neutron-78756bd8-c6jzz

Created

Created container: neutron-api

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

neutron-public-route

Requested

Created new CertificateRequest resource "neutron-public-route-1"

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

neutron-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-key-manager

neutron-public-route

Generated

Stored new private key in temporary Secret resource "neutron-public-route-whg4s"

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

default-scheduler

neutron-79bd95bbf9-vglm6

Scheduled

Successfully assigned openstack/neutron-79bd95bbf9-vglm6 to master-0

openstack

replicaset-controller

neutron-79bd95bbf9

SuccessfulCreate

Created pod: neutron-79bd95bbf9-vglm6

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-79bd95bbf9 to 1

openstack

kubelet

neutron-79bd95bbf9-vglm6

Started

Started container neutron-api

openstack

multus

neutron-79bd95bbf9-vglm6

AddedInterface

Add eth0 [10.128.0.224/23] from ovn-kubernetes

openstack

kubelet

neutron-79bd95bbf9-vglm6

Started

Started container neutron-httpd

openstack

multus

neutron-79bd95bbf9-vglm6

AddedInterface

Add internalapi [172.17.0.33/24] from openstack/internalapi

openstack

kubelet

neutron-79bd95bbf9-vglm6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

neutron-79bd95bbf9-vglm6

Created

Created container: neutron-httpd

openstack

kubelet

neutron-79bd95bbf9-vglm6

Created

Created container: neutron-api

openstack

kubelet

neutron-79bd95bbf9-vglm6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

default-scheduler

cinder-675ba-volume-lvm-iscsi-0

Scheduled

Successfully assigned openstack/cinder-675ba-volume-lvm-iscsi-0 to master-0
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

job-controller

cinder-675ba-db-sync

Completed

Job completed
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

default-scheduler

cinder-675ba-scheduler-0

Scheduled

Successfully assigned openstack/cinder-675ba-scheduler-0 to master-0
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

metallb-controller

cinder-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

cert-manager-certificates-trigger

cinder-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

multus

cinder-675ba-scheduler-0

AddedInterface

Add eth0 [10.128.0.225/23] from ovn-kubernetes

openstack

replicaset-controller

dnsmasq-dns-6456885d89

SuccessfulDelete

Deleted pod: dnsmasq-dns-6456885d89-8nk8d

openstack

kubelet

dnsmasq-dns-6456885d89-8nk8d

Killing

Stopping container dnsmasq-dns

openstack

default-scheduler

cinder-675ba-backup-0

Scheduled

Successfully assigned openstack/cinder-675ba-backup-0 to master-0

openstack

replicaset-controller

dnsmasq-dns-78fdb4cf6c

SuccessfulCreate

Created pod: dnsmasq-dns-78fdb4cf6c-nxlpt

openstack

default-scheduler

dnsmasq-dns-78fdb4cf6c-nxlpt

Scheduled

Successfully assigned openstack/dnsmasq-dns-78fdb4cf6c-nxlpt to master-0

openstack

default-scheduler

cinder-675ba-api-0

Scheduled

Successfully assigned openstack/cinder-675ba-api-0 to master-0

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified"

openstack

cert-manager-certificates-issuing

cinder-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-internal-svc

Requested

Created new CertificateRequest resource "cinder-internal-svc-1"

openstack

cert-manager-certificates-key-manager

cinder-internal-svc

Generated

Stored new private key in temporary Secret resource "cinder-internal-svc-4lx6j"

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

cinder-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

cinder-675ba-api-0

AddedInterface

Add eth0 [10.128.0.229/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-78fdb4cf6c-nxlpt

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

multus

dnsmasq-dns-78fdb4cf6c-nxlpt

AddedInterface

Add eth0 [10.128.0.228/23] from ovn-kubernetes

openstack

cert-manager-certificates-trigger

cinder-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

cinder-675ba-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine

openstack

kubelet

cinder-675ba-scheduler-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified"

openstack

kubelet

cinder-675ba-backup-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified"

openstack

multus

cinder-675ba-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

multus

cinder-675ba-backup-0

AddedInterface

Add eth0 [10.128.0.227/23] from ovn-kubernetes

openstack

multus

cinder-675ba-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.226/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

cinder-675ba-scheduler-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" in 947ms (947ms including waiting). Image size: 1083250334 bytes.

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

cinder-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

cinder-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

cinder-public-route

Generated

Stored new private key in temporary Secret resource "cinder-public-route-tsfz4"

openstack

cert-manager-certificates-request-manager

cinder-public-route

Requested

Created new CertificateRequest resource "cinder-public-route-1"

openstack

cert-manager-certificates-issuing

cinder-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" in 1.033s (1.033s including waiting). Image size: 1084192222 bytes.

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-675ba-backup-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" in 880ms (880ms including waiting). Image size: 1083255579 bytes.

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

cinder-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

cinder-675ba-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine

openstack

kubelet

cinder-675ba-api-0

Started

Started container cinder-675ba-api-log

openstack

kubelet

dnsmasq-dns-78fdb4cf6c-nxlpt

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

dnsmasq-dns-78fdb4cf6c-nxlpt

Started

Started container init

openstack

kubelet

dnsmasq-dns-78fdb4cf6c-nxlpt

Created

Created container: init

openstack

kubelet

cinder-675ba-api-0

Created

Created container: cinder-675ba-api-log

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

cinder-public-svc

Generated

Stored new private key in temporary Secret resource "cinder-public-svc-52v96"

openstack

cert-manager-certificates-request-manager

cinder-public-svc

Requested

Created new CertificateRequest resource "cinder-public-svc-1"

openstack

cert-manager-certificates-issuing

cinder-public-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

dnsmasq-dns-78fdb4cf6c-nxlpt

Started

Started container dnsmasq-dns

openstack

kubelet

cinder-675ba-api-0

Created

Created container: cinder-api

openstack

kubelet

cinder-675ba-api-0

Started

Started container cinder-api

openstack

kubelet

cinder-675ba-scheduler-0

Created

Created container: cinder-scheduler

openstack

kubelet

dnsmasq-dns-78fdb4cf6c-nxlpt

Created

Created container: dnsmasq-dns

openstack

kubelet

cinder-675ba-backup-0

Started

Started container probe

openstack

kubelet

cinder-675ba-scheduler-0

Started

Started container cinder-scheduler

openstack

kubelet

cinder-675ba-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" already present on machine

openstack

kubelet

cinder-675ba-backup-0

Created

Created container: probe

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" already present on machine

openstack

kubelet

cinder-675ba-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" already present on machine

openstack

kubelet

cinder-675ba-backup-0

Started

Started container cinder-backup

openstack

kubelet

cinder-675ba-backup-0

Created

Created container: cinder-backup

openstack

statefulset-controller

cinder-675ba-api

SuccessfulDelete

delete Pod cinder-675ba-api-0 in StatefulSet cinder-675ba-api successful
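
A SuccessfulDelete here, with a matching SuccessfulCreate and a fresh Scheduled event for the same pod name further down, is the normal StatefulSet replacement pattern: ordinal 0 is deleted and recreated in place rather than renamed. A sketch of watching such a replacement live, assuming the kubernetes Python client and a reachable kubeconfig (the pod name comes from this log and is illustrative):

    # Minimal sketch: watch a StatefulSet pod being deleted and recreated.
    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()

    w = watch.Watch()
    for event in w.stream(
        v1.list_namespaced_pod, "openstack",
        field_selector="metadata.name=cinder-675ba-api-0",
        timeout_seconds=120,
    ):
        # Expect DELETED for the old pod, then ADDED/MODIFIED for its successor.
        print(event["type"], event["object"].status.phase)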

openstack

kubelet

cinder-675ba-scheduler-0

Created

Created container: probe

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Started

Started container probe

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Created

Created container: probe

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Created

Created container: cinder-volume

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Started

Started container cinder-volume

openstack

kubelet

cinder-675ba-scheduler-0

Started

Started container probe

openstack

kubelet

cinder-675ba-api-0

Killing

Stopping container cinder-api

openstack

kubelet

cinder-675ba-api-0

Killing

Stopping container cinder-675ba-api-log

openstack

kubelet

cinder-675ba-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine

openstack

multus

cinder-675ba-api-0

AddedInterface

Add eth0 [10.128.0.230/23] from ovn-kubernetes
(x2)

openstack

statefulset-controller

cinder-675ba-api

SuccessfulCreate

create Pod cinder-675ba-api-0 in StatefulSet cinder-675ba-api successful

openstack

default-scheduler

cinder-675ba-api-0

Scheduled

Successfully assigned openstack/cinder-675ba-api-0 to master-0

openstack

kubelet

cinder-675ba-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine

openstack

kubelet

cinder-675ba-api-0

Created

Created container: cinder-675ba-api-log

openstack

kubelet

cinder-675ba-api-0

Started

Started container cinder-675ba-api-log

openstack

job-controller

ironic-inspector-c3c2-account-create-update

SuccessfulCreate

Created pod: ironic-inspector-c3c2-account-create-update-w6k86

openstack

default-scheduler

ironic-inspector-c3c2-account-create-update-w6k86

Scheduled

Successfully assigned openstack/ironic-inspector-c3c2-account-create-update-w6k86 to master-0

openstack

default-scheduler

ironic-neutron-agent-7dffdc6989-dw4bq

Scheduled

Successfully assigned openstack/ironic-neutron-agent-7dffdc6989-dw4bq to master-0

openstack

replicaset-controller

ironic-neutron-agent-7dffdc6989

SuccessfulCreate

Created pod: ironic-neutron-agent-7dffdc6989-dw4bq

openstack

replicaset-controller

dnsmasq-dns-6bf78b7

SuccessfulCreate

Created pod: dnsmasq-dns-6bf78b7-cqc9l

openstack

kubelet

dnsmasq-dns-78fdb4cf6c-nxlpt

Killing

Stopping container dnsmasq-dns

openstack

default-scheduler

ironic-inspector-db-create-hzm2x

Scheduled

Successfully assigned openstack/ironic-inspector-db-create-hzm2x to master-0

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

var-lib-ironic-ironic-conductor-0

Provisioning

External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0"
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

statefulset-controller

ironic-conductor

SuccessfulCreate

create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful

openstack

statefulset-controller

ironic-conductor

SuccessfulCreate

create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

replicaset-controller

dnsmasq-dns-78fdb4cf6c

SuccessfulDelete

Deleted pod: dnsmasq-dns-78fdb4cf6c-nxlpt

openstack

deployment-controller

ironic-neutron-agent

ScalingReplicaSet

Scaled up replica set ironic-neutron-agent-7dffdc6989 to 1

openstack

replicaset-controller

ironic-657ddbd5bb

SuccessfulCreate

Created pod: ironic-657ddbd5bb-fdfgw

openstack

job-controller

ironic-db-sync

Completed

Job completed
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
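
MetalLB raises a separate deprecatedAnnotation event for each legacy metallb.universe.tf/* annotation, which is why the ironic-internal Service produces three of them here. A sketch that finds every Service in the namespace still carrying these annotations, assuming the kubernetes Python client and a reachable kubeconfig (the annotation keys are exactly the ones named in the events):

    # Minimal sketch: list Services still using deprecated MetalLB annotations.
    from kubernetes import client, config

    DEPRECATED = {
        "metallb.universe.tf/address-pool",
        "metallb.universe.tf/loadBalancerIPs",
        "metallb.universe.tf/allow-shared-ip",
    }

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for svc in v1.list_namespaced_service("openstack").items:
        hits = DEPRECATED & set((svc.metadata.annotations or {}).keys())
        if hits:
            print(svc.metadata.name, sorted(hits))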

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled up replica set ironic-657ddbd5bb to 1

openstack

metallb-controller

ironic-internal

IPAllocated

Assigned IP ["172.20.1.80"]

openstack

default-scheduler

ironic-657ddbd5bb-fdfgw

Scheduled

Successfully assigned openstack/ironic-657ddbd5bb-fdfgw to master-0

openstack

default-scheduler

dnsmasq-dns-6bf78b7-cqc9l

Scheduled

Successfully assigned openstack/dnsmasq-dns-6bf78b7-cqc9l to master-0

openstack

job-controller

ironic-inspector-db-create

SuccessfulCreate

Created pod: ironic-inspector-db-create-hzm2x

openstack

kubelet

cinder-675ba-api-0

Created

Created container: cinder-api

openstack

persistentvolume-controller

var-lib-ironic-ironic-conductor-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

kubelet

cinder-675ba-api-0

Started

Started container cinder-api

openstack

cert-manager-certificates-trigger

ironic-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

multus

ironic-inspector-db-create-hzm2x

AddedInterface

Add eth0 [10.128.0.231/23] from ovn-kubernetes

openstack

kubelet

ironic-inspector-db-create-hzm2x

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

multus

ironic-inspector-c3c2-account-create-update-w6k86

AddedInterface

Add eth0 [10.128.0.233/23] from ovn-kubernetes

openstack

statefulset-controller

cinder-675ba-scheduler

SuccessfulDelete

delete Pod cinder-675ba-scheduler-0 in StatefulSet cinder-675ba-scheduler successful

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ironic-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ironic-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

multus

dnsmasq-dns-6bf78b7-cqc9l

AddedInterface

Add eth0 [10.128.0.235/23] from ovn-kubernetes

openstack

cert-manager-certificates-key-manager

ironic-internal-svc

Generated

Stored new private key in temporary Secret resource "ironic-internal-svc-sqnsf"

openstack

cert-manager-certificates-request-manager

ironic-internal-svc

Requested

Created new CertificateRequest resource "ironic-internal-svc-1"

openstack

cert-manager-certificates-issuing

ironic-internal-svc

Issuing

The certificate has been successfully issued

openstack

multus

ironic-657ddbd5bb-fdfgw

AddedInterface

Add eth0 [10.128.0.234/23] from ovn-kubernetes

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified"

openstack

kubelet

ironic-inspector-c3c2-account-create-update-w6k86

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

kubelet

dnsmasq-dns-6bf78b7-cqc9l

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

ironic-inspector-db-create-hzm2x

Created

Created container: mariadb-database-create

openstack

kubelet

ironic-inspector-db-create-hzm2x

Started

Started container mariadb-database-create

openstack

multus

ironic-neutron-agent-7dffdc6989-dw4bq

AddedInterface

Add eth0 [10.128.0.232/23] from ovn-kubernetes

openstack

topolvm.io_lvms-operator-fcd55dd45-6z56x_971af561-93f3-47ad-ae91-2e8ac9889acc

var-lib-ironic-ironic-conductor-0

ProvisioningSucceeded

Successfully provisioned volume pvc-8f10eb9b-d44c-4f28-b4e5-ca4c08dc4418

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

ironic-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-approver

ironic-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-675ba-backup-0

Killing

Stopping container cinder-backup

openstack

cert-manager-certificaterequests-issuer-acme

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-675ba-backup-0

Killing

Stopping container probe

openstack

statefulset-controller

cinder-675ba-backup

SuccessfulDelete

delete Pod cinder-675ba-backup-0 in StatefulSet cinder-675ba-backup successful

openstack

kubelet

dnsmasq-dns-6bf78b7-cqc9l

Created

Created container: init

openstack

cert-manager-certificates-issuing

ironic-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

ironic-public-svc

Requested

Created new CertificateRequest resource "ironic-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ironic-public-svc

Generated

Stored new private key in temporary Secret resource "ironic-public-svc-5qc6q"

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Killing

Stopping container cinder-volume

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Killing

Stopping container probe

openstack

kubelet

dnsmasq-dns-6bf78b7-cqc9l

Started

Started container init

openstack

kubelet

cinder-675ba-scheduler-0

Killing

Stopping container probe

openstack

kubelet

cinder-675ba-scheduler-0

Killing

Stopping container cinder-scheduler

openstack

cert-manager-certificates-trigger

ironic-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

ironic-inspector-c3c2-account-create-update-w6k86

Created

Created container: mariadb-account-create-update

openstack

kubelet

ironic-inspector-c3c2-account-create-update-w6k86

Started

Started container mariadb-account-create-update

openstack

statefulset-controller

cinder-675ba-volume-lvm-iscsi

SuccessfulDelete

delete Pod cinder-675ba-volume-lvm-iscsi-0 in StatefulSet cinder-675ba-volume-lvm-iscsi successful
(x2)

openstack

persistentvolume-controller

var-lib-ironic-ironic-conductor-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
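
The var-lib-ironic-ironic-conductor-0 events scattered through this section show the WaitForFirstConsumer flow end to end: binding is deferred until ironic-conductor-0 is scheduled, the external topolvm.io provisioner then provisions, and ProvisioningSucceeded names the resulting PV. A sketch to confirm the claim ended up Bound, assuming the kubernetes Python client and a reachable kubeconfig (claim name from this log):

    # Minimal sketch: check a PVC's phase and the volume it bound to.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pvc = v1.read_namespaced_persistent_volume_claim(
        "var-lib-ironic-ironic-conductor-0", "openstack"
    )
    print(pvc.status.phase, pvc.spec.volume_name)  # expect Bound / pvc-8f10eb9b-...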

openstack

replicaset-controller

ironic-565c7fbf46

SuccessfulCreate

Created pod: ironic-565c7fbf46-lqmmt

openstack

cert-manager-certificates-issuing

ironic-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-approver

ironic-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled up replica set ironic-565c7fbf46 to 1

openstack

cert-manager-certificates-key-manager

ironic-public-route

Generated

Stored new private key in temporary Secret resource "ironic-public-route-s555r"

openstack

cert-manager-certificates-request-manager

ironic-public-route

Requested

Created new CertificateRequest resource "ironic-public-route-1"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

default-scheduler

ironic-565c7fbf46-lqmmt

Scheduled

Successfully assigned openstack/ironic-565c7fbf46-lqmmt to master-0

openstack

kubelet

dnsmasq-dns-6bf78b7-cqc9l

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

default-scheduler

ironic-conductor-0

Scheduled

Successfully assigned openstack/ironic-conductor-0 to master-0

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified" in 3.632s (3.632s including waiting). Image size: 655324502 bytes.

openstack

kubelet

dnsmasq-dns-6bf78b7-cqc9l

Created

Created container: dnsmasq-dns

openstack

default-scheduler

cinder-675ba-volume-lvm-iscsi-0

Scheduled

Successfully assigned openstack/cinder-675ba-volume-lvm-iscsi-0 to master-0

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" in 4.07s (4.07s including waiting). Image size: 536338720 bytes.

openstack

default-scheduler

cinder-675ba-scheduler-0

Scheduled

Successfully assigned openstack/cinder-675ba-scheduler-0 to master-0
(x2)

openstack

statefulset-controller

cinder-675ba-volume-lvm-iscsi

SuccessfulCreate

create Pod cinder-675ba-volume-lvm-iscsi-0 in StatefulSet cinder-675ba-volume-lvm-iscsi successful

openstack

kubelet

dnsmasq-dns-6bf78b7-cqc9l

Started

Started container dnsmasq-dns
(x2)

openstack

statefulset-controller

cinder-675ba-scheduler

SuccessfulCreate

create Pod cinder-675ba-scheduler-0 in StatefulSet cinder-675ba-scheduler successful

openstack

kubelet

ironic-565c7fbf46-lqmmt

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine

openstack

job-controller

ironic-inspector-db-create

Completed

Job completed

openstack

kubelet

cinder-675ba-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" already present on machine

openstack

multus

ironic-565c7fbf46-lqmmt

AddedInterface

Add eth0 [10.128.0.237/23] from ovn-kubernetes

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Started

Started container init

openstack

kubelet

ironic-565c7fbf46-lqmmt

Created

Created container: init

openstack

kubelet

ironic-565c7fbf46-lqmmt

Started

Started container init

openstack

multus

cinder-675ba-scheduler-0

AddedInterface

Add eth0 [10.128.0.238/23] from ovn-kubernetes

openstack

multus

ironic-conductor-0

AddedInterface

Add eth0 [10.128.0.236/23] from ovn-kubernetes

openstack

multus

ironic-conductor-0

AddedInterface

Add ironic [172.20.1.31/24] from openstack/ironic

openstack

kubelet

ironic-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" already present on machine

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Created

Created container: init
(x2)

openstack

statefulset-controller

cinder-675ba-backup

SuccessfulCreate

create Pod cinder-675ba-backup-0 in StatefulSet cinder-675ba-backup successful

openstack

default-scheduler

cinder-675ba-backup-0

Scheduled

Successfully assigned openstack/cinder-675ba-backup-0 to master-0

openstack

job-controller

ironic-inspector-c3c2-account-create-update

Completed

Job completed

openstack

multus

cinder-675ba-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" already present on machine

openstack

multus

cinder-675ba-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.239/23] from ovn-kubernetes

openstack

kubelet

cinder-675ba-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" already present on machine

openstack

multus

cinder-675ba-backup-0

AddedInterface

Add eth0 [10.128.0.240/23] from ovn-kubernetes

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" already present on machine

openstack

kubelet

ironic-565c7fbf46-lqmmt

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Created

Created container: cinder-volume

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Started

Started container cinder-volume

openstack

kubelet

cinder-675ba-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" already present on machine

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Created

Created container: probe

openstack

kubelet

ironic-565c7fbf46-lqmmt

Created

Created container: ironic-api-log

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Created

Created container: ironic-api-log

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Started

Started container ironic-api-log

openstack

kubelet

ironic-conductor-0

Started

Started container init

openstack

kubelet

cinder-675ba-scheduler-0

Created

Created container: cinder-scheduler

openstack

kubelet

ironic-conductor-0

Created

Created container: init

openstack

kubelet

cinder-675ba-scheduler-0

Started

Started container cinder-scheduler

openstack

kubelet

cinder-675ba-backup-0

Created

Created container: cinder-backup

openstack

kubelet

cinder-675ba-scheduler-0

Created

Created container: probe

openstack

replicaset-controller

placement-76cc655964

SuccessfulDelete

Deleted pod: placement-76cc655964-lxxvl

openstack

kubelet

placement-76cc655964-lxxvl

Killing

Stopping container placement-api

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled down replica set placement-76cc655964 to 0 from 1

openstack

kubelet

ironic-565c7fbf46-lqmmt

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine

openstack

kubelet

cinder-675ba-backup-0

Started

Started container cinder-backup

openstack

kubelet

cinder-675ba-volume-lvm-iscsi-0

Started

Started container probe

openstack

kubelet

cinder-675ba-scheduler-0

Started

Started container probe

openstack

kubelet

placement-76cc655964-lxxvl

Killing

Stopping container placement-log

openstack

kubelet

cinder-675ba-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" already present on machine

openstack

kubelet

ironic-565c7fbf46-lqmmt

Started

Started container ironic-api-log

openstack

kubelet

placement-76cc655964-lxxvl

Unhealthy

Readiness probe failed: Get "https://10.128.0.220:8778/": EOF

openstack

kubelet

placement-76cc655964-lxxvl

Unhealthy

Readiness probe failed: Get "https://10.128.0.220:8778/": EOF

openstack

kubelet

dnsmasq-dns-846459fb55-9x6r8

Killing

Stopping container dnsmasq-dns

openstack

replicaset-controller

dnsmasq-dns-846459fb55

SuccessfulDelete

Deleted pod: dnsmasq-dns-846459fb55-9x6r8

openstack

kubelet

ironic-565c7fbf46-lqmmt

Created

Created container: ironic-api

openstack

kubelet

ironic-565c7fbf46-lqmmt

Started

Started container ironic-api

openstack

kubelet

cinder-675ba-backup-0

Started

Started container probe

openstack

metallb-speaker

keystone-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

cinder-675ba-backup-0

Created

Created container: probe

openstack

kubelet

cinder-675ba-api-0

Unhealthy

Readiness probe failed: Get "https://10.128.0.230:8776/healthcheck": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

ironic-conductor-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified"

openstack

default-scheduler

openstackclient

Scheduled

Successfully assigned openstack/openstackclient to master-0

openstack

multus

openstackclient

AddedInterface

Add eth0 [10.128.0.241/23] from ovn-kubernetes
(x2)

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Started

Started container ironic-api
(x2)

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Created

Created container: ironic-api

openstack

kubelet

openstackclient

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified"
(x2)

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine

openstack

job-controller

ironic-inspector-db-sync

SuccessfulCreate

Created pod: ironic-inspector-db-sync-sksvm

openstack

default-scheduler

ironic-inspector-db-sync-sksvm

Scheduled

Successfully assigned openstack/ironic-inspector-db-sync-sksvm to master-0

openstack

metallb-speaker

cinder-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

placement-76cc655964-lxxvl

Unhealthy

Readiness probe failed: Get "https://10.128.0.220:8778/": read tcp 10.128.0.2:36074->10.128.0.220:8778: read: connection reset by peer

openstack

kubelet

placement-76cc655964-lxxvl

Unhealthy

Readiness probe failed: Get "https://10.128.0.220:8778/": read tcp 10.128.0.2:36068->10.128.0.220:8778: read: connection reset by peer

openstack

multus

ironic-inspector-db-sync-sksvm

AddedInterface

Add eth0 [10.128.0.242/23] from ovn-kubernetes

openstack

kubelet

ironic-inspector-db-sync-sksvm

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified"
(x3)

openstack

metallb-speaker

placement-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"
(x3)

openstack

kubelet

ironic-657ddbd5bb-fdfgw

BackOff

Back-off restarting failed container ironic-api in pod ironic-657ddbd5bb-fdfgw_openstack(85f7cb75-9466-47eb-bd3a-da17df2b5c2a)
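
The BackOff event means the ironic-api container is crash-looping; the deployment is scaled down and replaced just below, but the usual first diagnostic step is to read the container's last termination state. A sketch, assuming the kubernetes Python client and a reachable kubeconfig (pod name from this log; the pod must still exist when queried):

    # Minimal sketch: print the last termination state of each container.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = v1.read_namespaced_pod("ironic-657ddbd5bb-fdfgw", "openstack")
    for cs in pod.status.container_statuses or []:
        term = cs.last_state.terminated
        if term:
            print(cs.name, "exit code", term.exit_code, term.reason)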

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled down replica set ironic-657ddbd5bb to 0 from 1

openstack

kubelet

ironic-657ddbd5bb-fdfgw

Killing

Stopping container ironic-api-log

openstack

replicaset-controller

ironic-657ddbd5bb

SuccessfulDelete

Deleted pod: ironic-657ddbd5bb-fdfgw

openstack

default-scheduler

swift-proxy-856bf8b6f6-t9lvl

Scheduled

Successfully assigned openstack/swift-proxy-856bf8b6f6-t9lvl to master-0

openstack

deployment-controller

swift-proxy

ScalingReplicaSet

Scaled up replica set swift-proxy-856bf8b6f6 to 1

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled down replica set neutron-78756bd8 to 0 from 1

openstack

replicaset-controller

neutron-78756bd8

SuccessfulDelete

Deleted pod: neutron-78756bd8-c6jzz

openstack

replicaset-controller

swift-proxy-856bf8b6f6

SuccessfulCreate

Created pod: swift-proxy-856bf8b6f6-t9lvl

openstack

kubelet

neutron-78756bd8-c6jzz

Killing

Stopping container neutron-httpd

openstack

job-controller

nova-cell0-91a3-account-create-update

SuccessfulCreate

Created pod: nova-cell0-91a3-account-create-update-jwmlg

openstack

default-scheduler

nova-cell0-db-create-kg45w

Scheduled

Successfully assigned openstack/nova-cell0-db-create-kg45w to master-0

openstack

default-scheduler

nova-cell0-91a3-account-create-update-jwmlg

Scheduled

Successfully assigned openstack/nova-cell0-91a3-account-create-update-jwmlg to master-0

openstack

job-controller

nova-api-da7e-account-create-update

SuccessfulCreate

Created pod: nova-api-da7e-account-create-update-6k64t

openstack

job-controller

nova-cell1-db-create

SuccessfulCreate

Created pod: nova-cell1-db-create-rztgz

openstack

job-controller

nova-api-db-create

SuccessfulCreate

Created pod: nova-api-db-create-45pj6

openstack

default-scheduler

nova-cell1-db-create-rztgz

Scheduled

Successfully assigned openstack/nova-cell1-db-create-rztgz to master-0

openstack

job-controller

nova-cell1-40fc-account-create-update

SuccessfulCreate

Created pod: nova-cell1-40fc-account-create-update-hfwpl

openstack

job-controller

nova-cell0-db-create

SuccessfulCreate

Created pod: nova-cell0-db-create-kg45w

openstack

default-scheduler

nova-cell1-40fc-account-create-update-hfwpl

Scheduled

Successfully assigned openstack/nova-cell1-40fc-account-create-update-hfwpl to master-0

openstack

default-scheduler

nova-api-da7e-account-create-update-6k64t

Scheduled

Successfully assigned openstack/nova-api-da7e-account-create-update-6k64t to master-0

openstack

kubelet

neutron-78756bd8-c6jzz

Killing

Stopping container neutron-api

openstack

default-scheduler

nova-api-db-create-45pj6

Scheduled

Successfully assigned openstack/nova-api-db-create-45pj6 to master-0

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

Unhealthy

Liveness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 4792fdf65f1907ff7e2565afb3a964e9ef62317dd71687a74905d3610b602a64 is running failed: container process not found

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

Unhealthy

Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 4792fdf65f1907ff7e2565afb3a964e9ef62317dd71687a74905d3610b602a64 is running failed: container process not found

openstack

kubelet

ironic-inspector-db-sync-sksvm

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" in 7.278s (7.278s including waiting). Image size: 539743830 bytes.

openstack

kubelet

ironic-inspector-db-sync-sksvm

Created

Created container: ironic-inspector-db-sync

openstack

kubelet

ironic-inspector-db-sync-sksvm

Started

Started container ironic-inspector-db-sync
(x3)

openstack

metallb-speaker

ironic-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

nova-cell1-40fc-account-create-update-hfwpl

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

multus

nova-cell1-db-create-rztgz

AddedInterface

Add eth0 [10.128.0.247/23] from ovn-kubernetes

openstack

multus

nova-api-da7e-account-create-update-6k64t

AddedInterface

Add eth0 [10.128.0.246/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-db-create-rztgz

Started

Started container mariadb-database-create

openstack

kubelet

nova-cell1-db-create-rztgz

Created

Created container: mariadb-database-create

openstack

kubelet

nova-cell1-db-create-rztgz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

multus

nova-cell1-40fc-account-create-update-hfwpl

AddedInterface

Add eth0 [10.128.0.249/23] from ovn-kubernetes

openstack

kubelet

nova-api-da7e-account-create-update-6k64t

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" already present on machine

openstack

multus

nova-cell0-91a3-account-create-update-jwmlg

AddedInterface

Add eth0 [10.128.0.248/23] from ovn-kubernetes

openstack

multus

nova-cell0-db-create-kg45w

AddedInterface

Add eth0 [10.128.0.245/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-91a3-account-create-update-jwmlg

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

kubelet

nova-cell0-db-create-kg45w

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine
(x4)

openstack

metallb-speaker

neutron-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

multus

nova-api-db-create-45pj6

AddedInterface

Add eth0 [10.128.0.244/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-db-create-kg45w

Created

Created container: mariadb-database-create

openstack

kubelet

nova-api-db-create-45pj6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine

openstack

multus

swift-proxy-856bf8b6f6-t9lvl

AddedInterface

Add eth0 [10.128.0.243/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-91a3-account-create-update-jwmlg

Created

Created container: mariadb-account-create-update

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Created

Created container: proxy-httpd

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Started

Started container proxy-httpd

openstack

kubelet

nova-api-da7e-account-create-update-6k64t

Created

Created container: mariadb-account-create-update

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" already present on machine

openstack

kubelet

nova-api-da7e-account-create-update-6k64t

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-cell0-91a3-account-create-update-jwmlg

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-cell0-db-create-kg45w

Started

Started container mariadb-database-create

openstack

kubelet

nova-api-db-create-45pj6

Started

Started container mariadb-database-create

openstack

kubelet

nova-api-db-create-45pj6

Created

Created container: mariadb-database-create

openstack

kubelet

nova-cell1-40fc-account-create-update-hfwpl

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-cell1-40fc-account-create-update-hfwpl

Created

Created container: mariadb-account-create-update
(x3)

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

BackOff

Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-7dffdc6989-dw4bq_openstack(a94dba9c-1e25-42ed-b30a-d278979d1de9)

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Created

Created container: proxy-server

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Started

Started container proxy-server

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Unhealthy

Liveness probe failed: HTTP probe failed with statuscode: 503
(x4)

openstack

kubelet

swift-proxy-856bf8b6f6-t9lvl

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x2)

openstack

statefulset-controller

glance-1280f-default-external-api

SuccessfulDelete

delete Pod glance-1280f-default-external-api-0 in StatefulSet glance-1280f-default-external-api successful

openstack

kubelet

glance-1280f-default-external-api-0

Killing

Stopping container glance-log

openstack

kubelet

glance-1280f-default-external-api-0

Killing

Stopping container glance-httpd

openstack

kubelet

ironic-conductor-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" in 25.898s (25.898s including waiting). Image size: 785155373 bytes.

openstack

kubelet

ironic-conductor-0

Created

Created container: ironic-python-agent-init

openstack

kubelet

ironic-conductor-0

Started

Started container ironic-python-agent-init
(x2)

openstack

statefulset-controller

glance-1280f-default-internal-api

SuccessfulDelete

delete Pod glance-1280f-default-internal-api-0 in StatefulSet glance-1280f-default-internal-api successful

openstack

kubelet

openstackclient

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" in 24.262s (24.262s including waiting). Image size: 594485614 bytes.

openstack

kubelet

openstackclient

Started

Started container openstackclient

openstack

kubelet

openstackclient

Created

Created container: openstackclient

openstack

kubelet

glance-1280f-default-internal-api-0

Killing

Stopping container glance-httpd

openstack

kubelet

glance-1280f-default-internal-api-0

Killing

Stopping container glance-log

openstack

job-controller

nova-cell1-db-create

Completed

Job completed
(x2)

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified" already present on machine

openstack

job-controller

nova-api-da7e-account-create-update

Completed

Job completed

openstack

job-controller

nova-api-db-create

Completed

Job completed

openstack

job-controller

nova-cell0-db-create

Completed

Job completed

openstack

metallb-speaker

swift-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

job-controller

nova-cell0-91a3-account-create-update

Completed

Job completed

openstack

job-controller

nova-cell1-40fc-account-create-update

Completed

Job completed

openstack

job-controller

ironic-inspector-db-sync

Completed

Job completed
(x3)

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

Created

Created container: ironic-neutron-agent

openstack

default-scheduler

nova-cell0-conductor-db-sync-wcsvr

Scheduled

Successfully assigned openstack/nova-cell0-conductor-db-sync-wcsvr to master-0

openstack

job-controller

nova-cell0-conductor-db-sync

SuccessfulCreate

Created pod: nova-cell0-conductor-db-sync-wcsvr
(x3)

openstack

kubelet

ironic-neutron-agent-7dffdc6989-dw4bq

Started

Started container ironic-neutron-agent
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

default-scheduler

dnsmasq-dns-77475956d7-pp5mp

Scheduled

Successfully assigned openstack/dnsmasq-dns-77475956d7-pp5mp to master-0

openstack

multus

nova-cell0-conductor-db-sync-wcsvr

AddedInterface

Add eth0 [10.128.0.250/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-conductor-db-sync-wcsvr

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified"
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

replicaset-controller

dnsmasq-dns-77475956d7

SuccessfulCreate

Created pod: dnsmasq-dns-77475956d7-pp5mp

openstack

metallb-controller

ironic-inspector-internal

IPAllocated

Assigned IP ["172.20.1.80"]

openstack

default-scheduler

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openstack

cert-manager-certificates-key-manager

ironic-inspector-internal-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-jfwmg"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ironic-inspector-internal-svc

Requested

Created new CertificateRequest resource "ironic-inspector-internal-svc-1"

openstack

kubelet

dnsmasq-dns-77475956d7-pp5mp

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

multus

dnsmasq-dns-77475956d7-pp5mp

AddedInterface

Add eth0 [10.128.0.251/23] from ovn-kubernetes

openstack

cert-manager-certificates-issuing

ironic-inspector-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.252/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

ironic-inspector-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-approver

ironic-inspector-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

default-scheduler

glance-1280f-default-external-api-0

Scheduled

Successfully assigned openstack/glance-1280f-default-external-api-0 to master-0

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

ironic-inspector-public-svc

Issuing

Issuing certificate as Secret does not exist
(x3)

openstack

statefulset-controller

glance-1280f-default-external-api

SuccessfulCreate

create Pod glance-1280f-default-external-api-0 in StatefulSet glance-1280f-default-external-api successful

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-inspector-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified"

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-77475956d7-pp5mp

Started

Started container init

openstack

kubelet

ironic-conductor-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-svc-kw7bz"

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-svc

Requested

Created new CertificateRequest resource "ironic-inspector-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" already present on machine

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

kubelet

dnsmasq-dns-77475956d7-pp5mp

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine

openstack

kubelet

dnsmasq-dns-77475956d7-pp5mp

Created

Created container: init

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-python-agent-init

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-python-agent-init
(x3)

openstack

statefulset-controller

glance-1280f-default-internal-api

SuccessfulCreate

create Pod glance-1280f-default-internal-api-0 in StatefulSet glance-1280f-default-internal-api successful

openstack

default-scheduler

glance-1280f-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-1280f-default-internal-api-0 to master-0

openstack

kubelet

glance-1280f-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine

openstack

multus

glance-1280f-default-external-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

cert-manager-certificates-issuing

ironic-inspector-public-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

dnsmasq-dns-77475956d7-pp5mp

Started

Started container dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

multus

glance-1280f-default-external-api-0

AddedInterface

Add eth0 [10.128.0.253/23] from ovn-kubernetes

openstack

cert-manager-certificates-trigger

ironic-inspector-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-77475956d7-pp5mp

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-route

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-route-qlg4l"

openstack

statefulset-controller

ironic-inspector

SuccessfulDelete

delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

glance-1280f-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine

openstack

kubelet

glance-1280f-default-external-api-0

Started

Started container glance-log

openstack

kubelet

glance-1280f-default-external-api-0

Created

Created container: glance-log

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-issuing

ironic-inspector-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-route

Requested

Created new CertificateRequest resource "ironic-inspector-public-route-1"

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

glance-1280f-default-external-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-1280f-default-external-api-0

Started

Started container glance-httpd

openstack

replicaset-controller

dnsmasq-dns-6bf78b7

SuccessfulDelete

Deleted pod: dnsmasq-dns-6bf78b7-cqc9l

openstack

kubelet

dnsmasq-dns-6bf78b7-cqc9l

Killing

Stopping container dnsmasq-dns

openstack

kubelet

ironic-inspector-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" in 10.806s (10.806s including waiting). Image size: 657221885 bytes.

openstack

kubelet

ironic-conductor-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" in 10.832s (10.832s including waiting). Image size: 657221885 bytes.

openstack

multus

glance-1280f-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.254/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-conductor-db-sync-wcsvr

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" in 14.029s (14.029s including waiting). Image size: 668208107 bytes.

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-pxe-init

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-pxe-init

openstack

kubelet

ironic-conductor-0

Started

Started container pxe-init

openstack

kubelet

ironic-conductor-0

Created

Created container: pxe-init

openstack

kubelet

glance-1280f-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine

openstack

kubelet

glance-1280f-default-internal-api-0

Started

Started container glance-log

openstack

kubelet

nova-cell0-conductor-db-sync-wcsvr

Started

Started container nova-cell0-conductor-db-sync

openstack

kubelet

nova-cell0-conductor-db-sync-wcsvr

Created

Created container: nova-cell0-conductor-db-sync

openstack

kubelet

glance-1280f-default-internal-api-0

Created

Created container: glance-log

openstack

multus

glance-1280f-default-internal-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

kubelet

ironic-inspector-0

Killing

Stopping container inspector-pxe-init

openstack

kubelet

glance-1280f-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine

openstack

kubelet

glance-1280f-default-internal-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-1280f-default-internal-api-0

Started

Started container glance-httpd
(x2)

openstack

statefulset-controller

ironic-inspector

SuccessfulCreate

create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

default-scheduler

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.255/23] from ovn-kubernetes

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-python-agent-init

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-python-agent-init

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" already present on machine

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-pxe-init

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-pxe-init

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-inspector-httpd

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-inspector-httpd

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-httpboot

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-httpboot

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-inspector

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-inspector

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-dnsmasq

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-dnsmasq

openstack

kubelet

ironic-inspector-0

Created

Created container: ramdisk-logs

openstack

kubelet

ironic-inspector-0

Started

Started container ramdisk-logs

openstack

metallb-speaker

ironic-inspector-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"
(x3)

openstack

metallb-speaker

glance-default-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

statefulset-controller

nova-cell0-conductor

SuccessfulCreate

create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful

openstack

default-scheduler

nova-cell0-conductor-0

Scheduled

Successfully assigned openstack/nova-cell0-conductor-0 to master-0

openstack

job-controller

nova-cell0-conductor-db-sync

Completed

Job completed
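
The db-sync jobs in this log all follow the same arc: SuccessfulCreate for the pod, image pull, container run, then a job-controller Completed event. A sketch that verifies a job's terminal status directly, assuming the kubernetes Python client and a reachable kubeconfig (job name from this log):

    # Minimal sketch: read a Job's completion counters.
    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()

    job = batch.read_namespaced_job_status("nova-cell0-conductor-db-sync", "openstack")
    print("succeeded:", job.status.succeeded, "failed:", job.status.failed)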

openstack

kubelet

nova-cell0-conductor-0

Created

Created container: nova-cell0-conductor-conductor

openstack

kubelet

nova-cell0-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine

openstack

kubelet

nova-cell0-conductor-0

Started

Started container nova-cell0-conductor-conductor

openstack

multus

nova-cell0-conductor-0

AddedInterface

Add eth0 [10.128.1.0/23] from ovn-kubernetes

openstack

statefulset-controller

nova-cell1-compute-ironic-compute

SuccessfulCreate

create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful

openstack

default-scheduler

nova-cell1-compute-ironic-compute-0

Scheduled

Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0

openstack

default-scheduler

nova-cell0-cell-mapping-xkrf7

Scheduled

Successfully assigned openstack/nova-cell0-cell-mapping-xkrf7 to master-0

openstack

job-controller

nova-cell0-cell-mapping

SuccessfulCreate

Created pod: nova-cell0-cell-mapping-xkrf7

openstack

multus

nova-cell0-cell-mapping-xkrf7

AddedInterface

Add eth0 [10.128.1.1/23] from ovn-kubernetes

openstack

default-scheduler

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

default-scheduler

nova-scheduler-0

Scheduled

Successfully assigned openstack/nova-scheduler-0 to master-0

openstack

kubelet

nova-cell0-cell-mapping-xkrf7

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine

openstack

kubelet

nova-cell0-cell-mapping-xkrf7

Started

Started container nova-manage

openstack

kubelet

nova-cell0-cell-mapping-xkrf7

Created

Created container: nova-manage

openstack

default-scheduler

nova-cell1-novncproxy-0

Scheduled

Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0

openstack

metallb-controller

nova-metadata-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

multus

nova-cell1-compute-ironic-compute-0

AddedInterface

Add eth0 [10.128.1.2/23] from ovn-kubernetes

openstack

replicaset-controller

dnsmasq-dns-67f5b4fdc9

SuccessfulCreate

Created pod: dnsmasq-dns-67f5b4fdc9-swznp

openstack

job-controller

nova-cell1-conductor-db-sync

SuccessfulCreate

Created pod: nova-cell1-conductor-db-sync-7wljq

openstack

default-scheduler

nova-cell1-conductor-db-sync-7wljq

Scheduled

Successfully assigned openstack/nova-cell1-conductor-db-sync-7wljq to master-0
(x2)

openstack

metallb-controller

nova-metadata-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

default-scheduler

nova-metadata-0

Scheduled

Successfully assigned openstack/nova-metadata-0 to master-0
(x2)

openstack

metallb-controller

nova-metadata-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

default-scheduler

dnsmasq-dns-67f5b4fdc9-swznp

Scheduled

Successfully assigned openstack/dnsmasq-dns-67f5b4fdc9-swznp to master-0

openstack

cert-manager-certificates-trigger

nova-metadata-internal-svc

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack

metallb-controller

nova-metadata-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-cell1-novncproxy-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified"
openstack | cert-manager-certificates-request-manager | nova-metadata-internal-svc | Requested | Created new CertificateRequest resource "nova-metadata-internal-svc-1"
openstack | cert-manager-certificates-issuing | nova-metadata-internal-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | nova-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified"
openstack | cert-manager-certificates-key-manager | nova-metadata-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-metadata-internal-svc-h8vmn"
openstack | cert-manager-certificaterequests-approver | nova-metadata-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.6/23] from ovn-kubernetes
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified"
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | cert-manager-certificaterequests-issuer-venafi | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-api-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified"
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-acme | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-vault | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | nova-cell1-conductor-db-sync-7wljq | AddedInterface | Add eth0 [10.128.1.7/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-conductor-db-sync-7wljq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | nova-cell1-conductor-db-sync-7wljq | Created | Created container: nova-cell1-conductor-db-sync
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | nova-cell1-conductor-db-sync-7wljq | Started | Started container nova-cell1-conductor-db-sync
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-route | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-58rkj"
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-route | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1"
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-svc | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1"
openstack | multus | dnsmasq-dns-67f5b4fdc9-swznp | AddedInterface | Add eth0 [10.128.1.8/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Created | Created container: init
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Started | Started container init
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-svc | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-b5lbz"
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-route | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-vencrypt | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1"
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-vencrypt | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-sjgst"
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-vencrypt | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-vencrypt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulDelete | delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-vencrypt | Issuing | The certificate has been successfully issued
openstack | kubelet | nova-api-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" in 4.195s (4.195s including waiting). Image size: 685002983 bytes.
openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified" in 4.194s (4.194s including waiting). Image size: 668208104 bytes.
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified" in 3.902s (3.902s including waiting). Image size: 670568433 bytes.
openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" in 3.911s (3.911s including waiting). Image size: 685002983 bytes.
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Created | Created container: dnsmasq-dns
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Started | Started container dnsmasq-dns
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.9/23] from ovn-kubernetes
openstack | default-scheduler | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.3:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | dnsmasq-dns-77475956d7-pp5mp | Killing | Stopping container dnsmasq-dns
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.3:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | replicaset-controller | dnsmasq-dns-77475956d7 | SuccessfulDelete | Deleted pod: dnsmasq-dns-77475956d7-pp5mp
openstack | kubelet | dnsmasq-dns-77475956d7-pp5mp | Unhealthy | Readiness probe failed: dial tcp 10.128.0.251:5353: connect: connection refused
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Started | Started container nova-cell1-compute-ironic-compute-compute
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified" in 16.17s (16.17s including waiting). Image size: 1216409983 bytes.
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Created | Created container: nova-cell1-compute-ironic-compute-compute
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" already present on machine
openstack | job-controller | nova-cell0-cell-mapping | Completed | Job completed
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine
openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-conductor
openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-conductor
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.9:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.9:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | default-scheduler | nova-cell1-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-0 to master-0
openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful
openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine
openstack | kubelet | ironic-conductor-0 | Started | Started container httpboot
openstack | kubelet | ironic-conductor-0 | Created | Created container: httpboot
openstack | kubelet | ironic-conductor-0 | Created | Created container: dnsmasq
openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine
openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.1.10/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | ironic-conductor-0 | Started | Started container dnsmasq
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor
openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor
openstack | kubelet | ironic-conductor-0 | Unhealthy | Startup probe failed: ironic-conductor-0 is offline
openstack | default-scheduler | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | default-scheduler | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.11/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.12/23] from ovn-kubernetes
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified" already present on machine
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | default-scheduler | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.13/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.11:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.11:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.13:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulCreate | create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful (x2)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.13:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | default-scheduler | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified" already present on machine
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.14/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | nova-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | nova-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | default-scheduler | dnsmasq-dns-764cc67dbc-n94p5 | Scheduled | Successfully assigned openstack/dnsmasq-dns-764cc67dbc-n94p5 to master-0
openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-p9kj8"
openstack | replicaset-controller | dnsmasq-dns-764cc67dbc | SuccessfulCreate | Created pod: dnsmasq-dns-764cc67dbc-n94p5
openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1"
openstack | kubelet | dnsmasq-dns-764cc67dbc-n94p5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | multus | dnsmasq-dns-764cc67dbc-n94p5 | AddedInterface | Add eth0 [10.128.1.15/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-764cc67dbc-n94p5 | Started | Started container init
openstack | cert-manager-certificates-issuing | nova-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | nova-public-svc | Requested | Created new CertificateRequest resource "nova-public-svc-1"
openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-4h2wv"
openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | nova-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-764cc67dbc-n94p5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine
openstack | kubelet | dnsmasq-dns-764cc67dbc-n94p5 | Created | Created container: init
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | nova-public-route | Requested | Created new CertificateRequest resource "nova-public-route-1"
openstack | cert-manager-certificates-key-manager | nova-public-route | Generated | Stored new private key in temporary Secret resource "nova-public-route-kgpgk"
openstack | kubelet | dnsmasq-dns-764cc67dbc-n94p5 | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-764cc67dbc-n94p5 | Started | Started container dnsmasq-dns
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | nova-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-issuing | nova-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-wn7mj
openstack | default-scheduler | nova-cell1-host-discover-s5c66 | Scheduled | Successfully assigned openstack/nova-cell1-host-discover-s5c66 to master-0
openstack | job-controller | nova-cell1-host-discover | SuccessfulCreate | Created pod: nova-cell1-host-discover-s5c66
openstack | default-scheduler | nova-cell1-cell-mapping-wn7mj | Scheduled | Successfully assigned openstack/nova-cell1-cell-mapping-wn7mj to master-0
openstack | kubelet | nova-cell1-cell-mapping-wn7mj | Started | Started container nova-manage
openstack | multus | nova-cell1-host-discover-s5c66 | AddedInterface | Add eth0 [10.128.1.17/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-host-discover-s5c66 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine
openstack | multus | nova-cell1-cell-mapping-wn7mj | AddedInterface | Add eth0 [10.128.1.16/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-cell-mapping-wn7mj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine
openstack | kubelet | nova-cell1-cell-mapping-wn7mj | Created | Created container: nova-manage
openstack | kubelet | nova-cell1-host-discover-s5c66 | Created | Created container: nova-manage
openstack | kubelet | nova-cell1-host-discover-s5c66 | Started | Started container nova-manage
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service (x11)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service (x11)
openstack | default-scheduler | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.18/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | (combined from similar events): Scaled down replica set dnsmasq-dns-67f5b4fdc9 to 0 from 1 (x24)
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-67f5b4fdc9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-67f5b4fdc9-swznp
openstack | job-controller | nova-cell1-host-discover | Completed | Job completed
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful (x2)
openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful (x3)
openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful (x3)
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed
openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful (x4)
openstack | default-scheduler | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0
openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.19/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-67f5b4fdc9-swznp | Unhealthy | Readiness probe failed: dial tcp 10.128.1.8:5353: i/o timeout
openstack | default-scheduler | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful (x4)
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.20/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine
openstack | default-scheduler | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful (x3)
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified" already present on machine
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.21/23] from ovn-kubernetes
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.13:8775/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.13:8775/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.19:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.19:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.20:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.20:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x3)
openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x3)
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled down replica set sushy-emulator-78f6d7d749 to 0 from 1
sushy-emulator | kubelet | sushy-emulator-78f6d7d749-mx5qs | Killing | Stopping container sushy-emulator
sushy-emulator | replicaset-controller | sushy-emulator-78f6d7d749 | SuccessfulDelete | Deleted pod: sushy-emulator-78f6d7d749-mx5qs
sushy-emulator | replicaset-controller | sushy-emulator-84965d5d88 | SuccessfulCreate | Created pod: sushy-emulator-84965d5d88-ffft9
sushy-emulator | default-scheduler | sushy-emulator-84965d5d88-ffft9 | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-84965d5d88-ffft9 to master-0
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-84965d5d88 to 1
sushy-emulator | kubelet | sushy-emulator-84965d5d88-ffft9 | Pulled | Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" already present on machine
sushy-emulator | kubelet | sushy-emulator-84965d5d88-ffft9 | Started | Started container sushy-emulator
sushy-emulator | multus | sushy-emulator-84965d5d88-ffft9 | AddedInterface | Add eth0 [10.128.1.22/23] from ovn-kubernetes
sushy-emulator | kubelet | sushy-emulator-84965d5d88-ffft9 | Created | Created container: sushy-emulator
sushy-emulator | multus | sushy-emulator-84965d5d88-ffft9 | AddedInterface | Add ironic [172.20.1.71/24] from sushy-emulator/ironic
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | default-scheduler | keystone-cron-29548861-l62vw | Scheduled | Successfully assigned openstack/keystone-cron-29548861-l62vw to master-0
openstack | multus | keystone-cron-29548861-l62vw | AddedInterface | Add eth0 [10.128.1.23/23] from ovn-kubernetes
openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29548861
openstack | job-controller | keystone-cron-29548861 | SuccessfulCreate | Created pod: keystone-cron-29548861-l62vw
openstack | kubelet | keystone-cron-29548861-l62vw | Created | Created container: keystone-cron
openstack | kubelet | keystone-cron-29548861-l62vw | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" already present on machine
openstack | kubelet | keystone-cron-29548861-l62vw | Started | Started container keystone-cron
openstack | job-controller | keystone-cron-29548861 | Completed | Job completed
openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29548861, condition: Complete
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml (x6)
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-tj2hj namespace