| Namespace | RelatedObject | Reason | Message |
| --- | --- | --- | --- |
| openshift-operator-lifecycle-manager | package-server-manager-5c75f78c8b-9d82f | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f to master-0 |
| openshift-route-controller-manager | route-controller-manager-85f8857db4-hhqvj | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj to master-0 |
| assisted-installer | assisted-installer-controller-r6zx7 | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-r6zx7 to master-0 |
| openstack-operators | placement-operator-controller-manager-8497b45c89-8xrtm | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm to master-0 |
| openstack-operators | ovn-operator-controller-manager-5955d8c787-zbd8b | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b to master-0 |
| openshift-operator-lifecycle-manager | olm-operator-5499d7f7bb-8xdmq | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq to master-0 |
| openstack-operators | swift-operator-controller-manager-68f46476f-tc9k2 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2 to master-0 |
| openstack-operators | watcher-operator-controller-manager-bccc79885-96xg2 | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2 to master-0 |
| assisted-installer | assisted-installer-controller-r6zx7 | FailedScheduling | no nodes available to schedule pods |
| cert-manager | cert-manager-545d4d4674-ss7w9 | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-ss7w9 to master-0 |
| openshift-monitoring | metrics-server-7bf9b765b9-b9fxz | Scheduled | Successfully assigned openshift-monitoring/metrics-server-7bf9b765b9-b9fxz to master-0 |
| openshift-monitoring | metrics-server-65cdf565cd-555rj | Scheduled | Successfully assigned openshift-monitoring/metrics-server-65cdf565cd-555rj to master-0 |
| openshift-monitoring | kube-state-metrics-59584d565f-gsgxz | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-59584d565f-gsgxz to master-0 |
| openstack-operators | telemetry-operator-controller-manager-589c568786-9ljm5 | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5 to master-0 |
| openstack-operators | test-operator-controller-manager-5dc6794d5b-96zg4 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4 to master-0 |
| openstack-operators | watcher-operator-controller-manager-bccc79885-96xg2 | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2 to master-0 |
| cert-manager | cert-manager-cainjector-5545bd876-vhdf8 | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-vhdf8 to master-0 |
| sushy-emulator | nova-console-poller-5bbdbdc4dc-t2lxm | Scheduled | Successfully assigned sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm to master-0 |
| openshift-operator-lifecycle-manager | packageserver-df5f88cd4-cwzcs | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs to master-0 |
| openshift-operators | obo-prometheus-operator-68bc856cb9-gmzdr | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr to master-0 |
| openshift-cluster-samples-operator | cluster-samples-operator-65c5c48b9b-hmlsl | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl to master-0 |
| openshift-operators | obo-prometheus-operator-admission-webhook-f46855c6-pq8bs | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs to master-0 |
| openshift-monitoring | cluster-monitoring-operator-6bb6d78bf-mzb7q | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q to master-0 |
| openshift-monitoring | cluster-monitoring-operator-6bb6d78bf-mzb7q | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| cert-manager | cert-manager-webhook-6888856db4-pxvzq | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-pxvzq to master-0 |
| openshift-machine-api | machine-api-operator-5c7cf458b4-65mc5 | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5 to master-0 |
| openshift-machine-api | control-plane-machine-set-operator-686847ff5f-zzvtt | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt to master-0 |
| openshift-machine-api | cluster-baremetal-operator-d6bb9bb76-54hnv | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv to master-0 |
| openshift-machine-api | cluster-autoscaler-operator-86b8dc6d6-mcf2z | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z to master-0 |
| openshift-operators | obo-prometheus-operator-admission-webhook-f46855c6-qm7sz | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz to master-0 |
| openshift-operators | observability-operator-59bdc8b94-tgbdb | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-tgbdb to master-0 |
| openshift-operators | perses-operator-5bf474d74f-jbdsj | Scheduled | Successfully assigned openshift-operators/perses-operator-5bf474d74f-jbdsj to master-0 |
| openshift-ovn-kubernetes | ovnkube-control-plane-5d8dfcdc87-b8ght | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght to master-0 |
| openshift-cluster-node-tuning-operator | tuned-2w6mj | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-2w6mj to master-0 |
| openshift-operator-lifecycle-manager | collect-profiles-29531910-4pps5 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5 to master-0 |
| openshift-operator-lifecycle-manager | collect-profiles-29531895-57vmb | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb to master-0 |
| sushy-emulator | nova-console-recorder-7b97cdbf9f-vzh2n | Scheduled | Successfully assigned sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n to master-0 |
| openshift-operator-lifecycle-manager | collect-profiles-29531880-xpxmc | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc to master-0 |
| openshift-operator-lifecycle-manager | collect-profiles-29531865-5wmht | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht to master-0 |
| openshift-operator-lifecycle-manager | collect-profiles-29531850-l54gb | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb to master-0 |
| openshift-cluster-storage-operator | csi-snapshot-controller-6847bb4785-vqn96 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96 to master-0 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-5d87bf58c-ncrqj | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj to master-0 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-5d87bf58c-ncrqj | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-operator-lifecycle-manager | collect-profiles-29531835-tsgrz | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz to master-0 |
| openshift-operator-lifecycle-manager | collect-profiles-29531835-tsgrz | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-operator-lifecycle-manager | collect-profiles-29531835-tsgrz | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-6fb4df594f-8tv99 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-6fb4df594f-8tv99 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99 to master-0 |
| openshift-operator-lifecycle-manager | catalog-operator-596f79dd6f-v22h2 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2 to master-0 |
| openshift-operator-controller | operator-controller-controller-manager-9cc7d7bb-t75jj | Scheduled | Successfully assigned openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj to master-0 |
| openshift-oauth-apiserver | apiserver-6f8b7f45f7-5df4m | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m to master-0 |
| openshift-nmstate | nmstate-webhook-866bcb46dc-qp4cm | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm to master-0 |
| openshift-nmstate | nmstate-operator-694c9596b7-8jfxc | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-8jfxc to master-0 |
| openshift-nmstate | nmstate-metrics-58c85c668d-c85cm | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-c85cm to master-0 |
| openshift-nmstate | nmstate-handler-r6rsr | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-r6rsr to master-0 |
| openshift-cluster-version | cluster-version-operator-57476485-7g2gq | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-57476485-7g2gq to master-0 |
| openshift-nmstate | nmstate-console-plugin-5c78fc5d65-447df | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df to master-0 |
| openshift-cluster-version | cluster-version-operator-5cfd9759cf-r4rf2 | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2 to master-0 |
| openshift-network-operator | network-operator-7d7db75979-4fk6k | Scheduled | Successfully assigned openshift-network-operator/network-operator-7d7db75979-4fk6k to master-0 |
| openshift-ovn-kubernetes | ovnkube-node-jtdzc | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-jtdzc to master-0 |
| openshift-network-operator | mtu-prober-cg7zd | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-cg7zd to master-0 |
| openshift-config-operator | openshift-config-operator-6f47d587d6-7b87v | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v to master-0 |
| openshift-network-operator | iptables-alerter-r2vvc | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-r2vvc to master-0 |
| openshift-cluster-olm-operator | cluster-olm-operator-5bd7768f54-qh6j7 | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7 to master-0 |
| openshift-cluster-olm-operator | cluster-olm-operator-5bd7768f54-qh6j7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-cluster-node-tuning-operator | tuned-2w6mj | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-2w6mj to master-0 |
| openshift-network-node-identity | network-node-identity-rlg4x | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-rlg4x to master-0 |
| openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bcf775fc9-h99t4 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4 to master-0 |
| openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bcf775fc9-h99t4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bcf775fc9-h99t4 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4 to master-0 |
| openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bcf775fc9-h99t4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-cluster-machine-approver | machine-approver-7dd9c7d7b9-pb6sw | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw to master-0 |
| openshift-ovn-kubernetes | ovnkube-node-vd82q | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-vd82q to master-0 |
| openshift-cluster-machine-approver | machine-approver-798b897698-6hgvq | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq to master-0 |
| openshift-insights | insights-operator-59b498fcfb-mprnx | Scheduled | Successfully assigned openshift-insights/insights-operator-59b498fcfb-mprnx to master-0 |
| openshift-cloud-credential-operator | cloud-credential-operator-6968c58f46-68rth | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth to master-0 |
| openshift-route-controller-manager | route-controller-manager-56fdc6b8c6-52tgv | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv to master-0 |
| sushy-emulator | sushy-emulator-84965d5d88-5549q | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-84965d5d88-5549q to master-0 |
| openshift-console | console-576fb8b7f5-srlps | Scheduled | Successfully assigned openshift-console/console-576fb8b7f5-srlps to master-0 |
| openshift-network-diagnostics | network-check-target-vp2jg | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-vp2jg to master-0 |
| openshift-route-controller-manager | route-controller-manager-654dcf5585-fgmnd | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd to master-0 |
| openshift-route-controller-manager | route-controller-manager-65c596ccd9-k8nq7 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| openshift-route-controller-manager | route-controller-manager-65c596ccd9-k8nq7 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| openshift-cloud-controller-manager-operator | cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq to master-0 |
| openshift-network-diagnostics | network-check-source-58fb6744f5-kn2z7 | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7 to master-0 |
| openshift-network-diagnostics | network-check-source-58fb6744f5-kn2z7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-route-controller-manager | route-controller-manager-7bcb58f8c7-49bnf | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf to master-0 |
| openshift-network-diagnostics | network-check-source-58fb6744f5-kn2z7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-network-console | networking-console-plugin-79f587d78f-bctpb | Scheduled | Successfully assigned openshift-network-console/networking-console-plugin-79f587d78f-bctpb to master-0 |
| openshift-console | console-5b6cfdbd-5qbf5 | Scheduled | Successfully assigned openshift-console/console-5b6cfdbd-5qbf5 to master-0 |
| openshift-route-controller-manager | route-controller-manager-85f8857db4-hhqvj | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh to master-0 |
| openshift-route-controller-manager | route-controller-manager-85ff64b64d-965rz | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| openshift-route-controller-manager | route-controller-manager-85ff64b64d-965rz | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz to master-0 |
| openshift-cloud-controller-manager-operator | cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t to master-0 |
| openshift-multus | network-metrics-daemon-2vsjh | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-2vsjh to master-0 |
| openshift-service-ca | service-ca-576b4d78bd-fsmrl | Scheduled | Successfully assigned openshift-service-ca/service-ca-576b4d78bd-fsmrl to master-0 |
| openshift-console | console-67bcb9df49-d2cv6 | Scheduled | Successfully assigned openshift-console/console-67bcb9df49-d2cv6 to master-0 |
| openshift-multus | multus-admission-controller-5f98f4f8d5-b985k | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f98f4f8d5-b985k to master-0 |
| openshift-multus | multus-admission-controller-5f98f4f8d5-b985k | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-service-ca-operator | service-ca-operator-c48c8bf7c-mcdrl | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-service-ca-operator | service-ca-operator-c48c8bf7c-mcdrl | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl to master-0 |
| openshift-multus | multus-admission-controller-5f54bf67d4-5tf9t | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t to master-0 |
| openshift-console | console-6d5c5b46fd-qr4b5 | Scheduled | Successfully assigned openshift-console/console-6d5c5b46fd-qr4b5 to master-0 |
| openshift-catalogd | catalogd-controller-manager-84b8d9d697-zvzxs | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs to master-0 |
| openshift-console | console-6f64db7f86-6brp5 | Scheduled | Successfully assigned openshift-console/console-6f64db7f86-6brp5 to master-0 |
| openshift-console | console-7875b98987-bmnll | Scheduled | Successfully assigned openshift-console/console-7875b98987-bmnll to master-0 |
| openshift-storage | lvms-operator-7fd9747c7b-h8dsz | Scheduled | Successfully assigned openshift-storage/lvms-operator-7fd9747c7b-h8dsz to master-0 |
| openshift-storage | vg-manager-r5t2w | Scheduled | Successfully assigned openshift-storage/vg-manager-r5t2w to master-0 |
| openstack-operators | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Scheduled | Successfully assigned openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz to master-0 |
| openstack-operators | openstack-operator-index-tptx6 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-tptx6 to master-0 |
| openstack-operators | barbican-operator-controller-manager-868647ff47-rngmn | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn to master-0 |
| openstack-operators | openstack-operator-index-2pkfs | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-2pkfs to master-0 |
| openstack-operators | openstack-operator-controller-manager-5dc486cffc-rbqzr | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr to master-0 |
| openstack-operators | openstack-operator-controller-init-55c649df44-8xq4x | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x to master-0 |
| openstack-operators | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j to master-0 |
| openstack-operators | octavia-operator-controller-manager-659dc6bbfc-z4h54 | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54 to master-0 |
| openstack-operators | nova-operator-controller-manager-567668f5cf-sfjt8 | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8 to master-0 |
| openstack-operators | neutron-operator-controller-manager-6bd4687957-svmn2 | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2 to master-0 |
| openstack-operators | mariadb-operator-controller-manager-6994f66f48-28hdf | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf to master-0 |
| openstack-operators | manila-operator-controller-manager-67d996989d-qbghx | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-67d996989d-qbghx to master-0 |
| openstack-operators | keystone-operator-controller-manager-b4d948c87-zlj5w | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w to master-0 |
| openstack-operators | ironic-operator-controller-manager-554564d7fc-db24j | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j to master-0 |
| openstack-operators | infra-operator-controller-manager-5f879c76b6-bv48m | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m to master-0 |
| openstack-operators | horizon-operator-controller-manager-5b9b8895d5-gmljt | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt to master-0 |
| openstack-operators | heat-operator-controller-manager-69f49c598c-75df9 | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-75df9 to master-0 |
| openstack-operators | cinder-operator-controller-manager-55d77d7b5c-m52ng | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng to master-0 |
| openstack-operators | glance-operator-controller-manager-784b5bb6c5-zghgv | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv to master-0 |
| openstack-operators | designate-operator-controller-manager-6d8bf5c495-vq97j | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j to master-0 |
| openstack-operators | cinder-operator-controller-manager-55d77d7b5c-m52ng | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng to master-0 |
| openstack-operators | barbican-operator-controller-manager-868647ff47-rngmn | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn to master-0 |
| openstack-operators | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Scheduled | Successfully assigned openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz to master-0 |
| openstack | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0 |
| openstack-operators | designate-operator-controller-manager-6d8bf5c495-vq97j | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j to master-0 |
| openstack | swift-ring-rebalance-th4vs | Scheduled | Successfully assigned openstack/swift-ring-rebalance-th4vs to master-0 |
| openstack | swift-proxy-8695dc84b-bccck | Scheduled | Successfully assigned openstack/swift-proxy-8695dc84b-bccck to master-0 |
| openstack | root-account-create-update-qw6cm | Scheduled | Successfully assigned openstack/root-account-create-update-qw6cm to master-0 |
| openstack | root-account-create-update-7zq2x | Scheduled | Successfully assigned openstack/root-account-create-update-7zq2x to master-0 |
| openshift-ingress-operator | ingress-operator-6569778c84-rr8r7 | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-6569778c84-rr8r7 to master-0 |
| openshift-ingress-operator | ingress-operator-6569778c84-rr8r7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-ingress-canary | ingress-canary-5m82s | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-5m82s to master-0 |
| openshift-ingress | router-default-7b65dc9fcb-zxkt2 | Scheduled | Successfully assigned openshift-ingress/router-default-7b65dc9fcb-zxkt2 to master-0 |
| openshift-ingress | router-default-7b65dc9fcb-zxkt2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-ingress | router-default-7b65dc9fcb-zxkt2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-image-registry | node-ca-flsqf | Scheduled | Successfully assigned openshift-image-registry/node-ca-flsqf to master-0 |
| openshift-operator-lifecycle-manager | package-server-manager-5c75f78c8b-9d82f | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-image-registry | cluster-image-registry-operator-779979bdf7-t98nr | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr to master-0 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-7bcfbc574b-8zrj9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-catalogd | catalogd-controller-manager-84b8d9d697-zvzxs | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs to master-0 |
| metallb-system | speaker-tds5c | Scheduled | Successfully assigned metallb-system/speaker-tds5c to master-0 |
| metallb-system | metallb-operator-webhook-server-f5b8c49d9-w75vs | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs to master-0 |
| metallb-system | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv to master-0 |
| metallb-system | frr-k8s-webhook-server-78b44bf5bb-9rc2g | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g to master-0 |
| metallb-system | frr-k8s-fsm64 | Scheduled | Successfully assigned metallb-system/frr-k8s-fsm64 to master-0 |
| metallb-system | controller-69bbfbf88f-hnk7l | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-hnk7l to master-0 |
| cert-manager | cert-manager-webhook-6888856db4-pxvzq | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-pxvzq to master-0 |
| cert-manager | cert-manager-cainjector-5545bd876-vhdf8 | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-vhdf8 to master-0 |
| cert-manager | cert-manager-545d4d4674-ss7w9 | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-ss7w9 to master-0 |
| openshift-monitoring | monitoring-plugin-755c6d6fd4-4ztmm | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm to master-0 |
| openshift-monitoring | node-exporter-qk7rz | Scheduled | Successfully assigned openshift-monitoring/node-exporter-qk7rz to master-0 |
| openstack | rabbitmq-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-server-0 to master-0 |
| openshift-monitoring | openshift-state-metrics-6dbff8cb4c-hvjlk | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk to master-0 |
| openshift-monitoring | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
| openstack-operators | test-operator-controller-manager-5dc6794d5b-96zg4 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4 to master-0 |
| openstack | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0 |
| openshift-monitoring | prometheus-operator-754bc4d665-xjddh | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-754bc4d665-xjddh to master-0 |
| openstack-operators | telemetry-operator-controller-manager-589c568786-9ljm5 | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5 to master-0 |
| openstack | placement-fb464bf7d-gv8b6 | Scheduled | Successfully assigned openstack/placement-fb464bf7d-gv8b6 to master-0 |
| openstack | placement-db-sync-629gt | Scheduled | Successfully assigned openstack/placement-db-sync-629gt to master-0 |
| openstack | placement-db-create-rfgw2 | Scheduled | Successfully assigned openstack/placement-db-create-rfgw2 to master-0 |
| openstack | placement-b69d-account-create-update-7dq92 | Scheduled | Successfully assigned openstack/placement-b69d-account-create-update-7dq92 to master-0 |
| openstack | placement-7d9548858-h45cl | Scheduled | Successfully assigned openstack/placement-7d9548858-h45cl to master-0 |
| openshift-monitoring | prometheus-operator-admission-webhook-75d56db95f-hw4m2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-monitoring | prometheus-operator-admission-webhook-75d56db95f-hw4m2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-monitoring | prometheus-operator-admission-webhook-75d56db95f-hw4m2 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2 to master-0 |
| openstack-operators | swift-operator-controller-manager-68f46476f-tc9k2 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2 to master-0 |
| openshift-monitoring | telemeter-client-96c995bf5-57k8x | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-96c995bf5-57k8x to master-0 |
| openstack | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0 |
| openstack | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0 |
| openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh to master-0 |
| metallb-system | controller-69bbfbf88f-hnk7l | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-hnk7l to master-0 |
| openshift-monitoring | thanos-querier-d588d74dc-gmlm4 | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-d588d74dc-gmlm4 to master-0 |
| openstack-operators | placement-operator-controller-manager-8497b45c89-8xrtm | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm to master-0 |
| openstack-operators | ovn-operator-controller-manager-5955d8c787-zbd8b | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b to master-0 |
| openshift-multus | cni-sysctl-allowlist-ds-j28p2 | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-j28p2 to master-0 |
| openshift-multus | multus-8qp5g | Scheduled | Successfully assigned openshift-multus/multus-8qp5g to master-0 |
| openstack | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0 |
| openstack | ovn-controller-ovs-86mtg | Scheduled | Successfully assigned openstack/ovn-controller-ovs-86mtg to master-0 |
| openstack | ovn-controller-metrics-5kqv6 | Scheduled | Successfully assigned openstack/ovn-controller-metrics-5kqv6 to master-0 |
openstack

ovn-controller-5kh8v

Scheduled

Successfully assigned openstack/ovn-controller-5kh8v to master-0

openshift-multus

multus-additional-cni-plugins-jknmn

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-jknmn to master-0

openstack

openstackclient

Scheduled

Successfully assigned openstack/openstackclient to master-0

metallb-system

frr-k8s-fsm64

Scheduled

Successfully assigned metallb-system/frr-k8s-fsm64 to master-0

openstack

openstackclient

Scheduled

Successfully assigned openstack/openstackclient to master-0

openstack

openstack-galera-0

Scheduled

Successfully assigned openstack/openstack-galera-0 to master-0

openstack

openstack-cell1-galera-0

Scheduled

Successfully assigned openstack/openstack-cell1-galera-0 to master-0

openstack

nova-scheduler-0

Scheduled

Successfully assigned openstack/nova-scheduler-0 to master-0

openstack

nova-scheduler-0

Scheduled

Successfully assigned openstack/nova-scheduler-0 to master-0

openstack-operators

glance-operator-controller-manager-784b5bb6c5-zghgv

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv to master-0

openstack

nova-scheduler-0

Scheduled

Successfully assigned openstack/nova-scheduler-0 to master-0

openstack

nova-metadata-0

Scheduled

Successfully assigned openstack/nova-metadata-0 to master-0

openstack-operators

openstack-operator-index-tptx6

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-tptx6 to master-0

openstack-operators

openstack-operator-index-2pkfs

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-2pkfs to master-0

openstack

nova-metadata-0

Scheduled

Successfully assigned openstack/nova-metadata-0 to master-0

openstack

nova-metadata-0

Scheduled

Successfully assigned openstack/nova-metadata-0 to master-0

openstack

nova-metadata-0

Scheduled

Successfully assigned openstack/nova-metadata-0 to master-0

openstack

nova-cell1-novncproxy-0

Scheduled

Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0

openstack

nova-cell1-novncproxy-0

Scheduled

Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0

openstack

nova-cell1-host-discover-4lbdf

Scheduled

Successfully assigned openstack/nova-cell1-host-discover-4lbdf to master-0

openshift-console

console-d54bc7dc7-5mlqz

Scheduled

Successfully assigned openshift-console/console-d54bc7dc7-5mlqz to master-0

openshift-multus

multus-additional-cni-plugins-jknmn

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-jknmn to master-0

openstack-operators

openstack-operator-controller-manager-5dc486cffc-rbqzr

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr to master-0

openstack-operators

openstack-operator-controller-init-55c649df44-8xq4x

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x to master-0

openshift-multus

multus-8qp5g

Scheduled

Successfully assigned openshift-multus/multus-8qp5g to master-0

openshift-multus

cni-sysctl-allowlist-ds-j28p2

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-j28p2 to master-0

openshift-monitoring

thanos-querier-d588d74dc-gmlm4

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-d588d74dc-gmlm4 to master-0

openshift-monitoring

telemeter-client-96c995bf5-57k8x

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-96c995bf5-57k8x to master-0

openshift-monitoring

prometheus-operator-admission-webhook-75d56db95f-hw4m2

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2 to master-0

metallb-system

frr-k8s-webhook-server-78b44bf5bb-9rc2g

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g to master-0

openshift-monitoring

prometheus-operator-admission-webhook-75d56db95f-hw4m2

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-admission-webhook-75d56db95f-hw4m2

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-754bc4d665-xjddh

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-754bc4d665-xjddh to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-multus

multus-admission-controller-5f54bf67d4-5tf9t

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t to master-0

openshift-multus

multus-admission-controller-5f98f4f8d5-b985k

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

multus-admission-controller-5f98f4f8d5-b985k

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-5f98f4f8d5-b985k to master-0

openshift-cluster-storage-operator

cluster-storage-operator-f94476f49-tlmg5

Scheduled

Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5 to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-7bcfbc574b-8zrj9

Scheduled

Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9 to master-0

metallb-system

metallb-operator-controller-manager-688bdcdc8c-4mpqv

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv to master-0

openshift-monitoring

openshift-state-metrics-6dbff8cb4c-hvjlk

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk to master-0

openshift-monitoring

node-exporter-qk7rz

Scheduled

Successfully assigned openshift-monitoring/node-exporter-qk7rz to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-579b7786b92xw4j

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j to master-0

openshift-console

downloads-955b69498-crzjg

Scheduled

Successfully assigned openshift-console/downloads-955b69498-crzjg to master-0

openshift-monitoring

monitoring-plugin-755c6d6fd4-4ztmm

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm to master-0

openshift-monitoring

metrics-server-7bf9b765b9-b9fxz

Scheduled

Successfully assigned openshift-monitoring/metrics-server-7bf9b765b9-b9fxz to master-0

openshift-multus

network-metrics-daemon-2vsjh

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-2vsjh to master-0

metallb-system

metallb-operator-webhook-server-f5b8c49d9-w75vs

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs to master-0

openshift-monitoring

metrics-server-65cdf565cd-555rj

Scheduled

Successfully assigned openshift-monitoring/metrics-server-65cdf565cd-555rj to master-0

openshift-monitoring

kube-state-metrics-59584d565f-gsgxz

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-59584d565f-gsgxz to master-0

openshift-console-operator

console-operator-5df5ffc47c-s22jd

Scheduled

Successfully assigned openshift-console-operator/console-operator-5df5ffc47c-s22jd to master-0

openshift-monitoring

cluster-monitoring-operator-6bb6d78bf-mzb7q

Scheduled

Successfully assigned openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q to master-0

openshift-monitoring

cluster-monitoring-operator-6bb6d78bf-mzb7q

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openstack-operators

octavia-operator-controller-manager-659dc6bbfc-z4h54

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54 to master-0

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

openshift-nmstate

nmstate-console-plugin-5c78fc5d65-447df

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df to master-0

openshift-nmstate

nmstate-handler-r6rsr

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-r6rsr to master-0

openstack-operators

nova-operator-controller-manager-567668f5cf-sfjt8

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8 to master-0

openshift-nmstate

nmstate-metrics-58c85c668d-c85cm

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-c85cm to master-0

openshift-nmstate

nmstate-operator-694c9596b7-8jfxc

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-8jfxc to master-0

openstack-operators

neutron-operator-controller-manager-6bd4687957-svmn2

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2 to master-0

openshift-nmstate

nmstate-webhook-866bcb46dc-qp4cm

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm to master-0

openshift-operators

obo-prometheus-operator-68bc856cb9-gmzdr

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr to master-0

metallb-system

speaker-tds5c

Scheduled

Successfully assigned metallb-system/speaker-tds5c to master-0

openstack-operators

mariadb-operator-controller-manager-6994f66f48-28hdf

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz to master-0

openstack-operators

manila-operator-controller-manager-67d996989d-qbghx

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-67d996989d-qbghx to master-0

openshift-operators

observability-operator-59bdc8b94-tgbdb

Scheduled

Successfully assigned openshift-operators/observability-operator-59bdc8b94-tgbdb to master-0

openshift-operators

perses-operator-5bf474d74f-jbdsj

Scheduled

Successfully assigned openshift-operators/perses-operator-5bf474d74f-jbdsj to master-0

openstack-operators

keystone-operator-controller-manager-b4d948c87-zlj5w

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w to master-0

openshift-storage

lvms-operator-7fd9747c7b-h8dsz

Scheduled

Successfully assigned openshift-storage/lvms-operator-7fd9747c7b-h8dsz to master-0

openshift-apiserver

apiserver-786f58c449-64k2s

Scheduled

Successfully assigned openshift-apiserver/apiserver-786f58c449-64k2s to master-0

openshift-marketplace

redhat-operators-xm8sw

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-xm8sw to master-0

openshift-marketplace

redhat-marketplace-v64s6

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-v64s6 to master-0

openshift-marketplace

marketplace-operator-6f5488b997-dbsnm

Scheduled

Successfully assigned openshift-marketplace/marketplace-operator-6f5488b997-dbsnm to master-0

openshift-apiserver

apiserver-fdc9d7cdd-8v72m

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-fdc9d7cdd-8v72m

Scheduled

Successfully assigned openshift-apiserver/apiserver-fdc9d7cdd-8v72m to master-0

openshift-storage

vg-manager-r5t2w

Scheduled

Successfully assigned openshift-storage/vg-manager-r5t2w to master-0

openstack-operators

ironic-operator-controller-manager-554564d7fc-db24j

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j to master-0

openstack

cinder-3d67-account-create-update-vkpgp

Scheduled

Successfully assigned openstack/cinder-3d67-account-create-update-vkpgp to master-0

openstack

cinder-b7346-api-0

Scheduled

Successfully assigned openstack/cinder-b7346-api-0 to master-0

openstack

cinder-b7346-api-0

Scheduled

Successfully assigned openstack/cinder-b7346-api-0 to master-0

openstack

cinder-b7346-backup-0

Scheduled

Successfully assigned openstack/cinder-b7346-backup-0 to master-0

openstack

cinder-b7346-backup-0

Scheduled

Successfully assigned openstack/cinder-b7346-backup-0 to master-0

openstack

cinder-b7346-db-sync-f9mbk

Scheduled

Successfully assigned openstack/cinder-b7346-db-sync-f9mbk to master-0

openstack

cinder-b7346-scheduler-0

Scheduled

Successfully assigned openstack/cinder-b7346-scheduler-0 to master-0

openstack

cinder-b7346-scheduler-0

Scheduled

Successfully assigned openstack/cinder-b7346-scheduler-0 to master-0

openstack

cinder-b7346-volume-lvm-iscsi-0

Scheduled

Successfully assigned openstack/cinder-b7346-volume-lvm-iscsi-0 to master-0

openshift-marketplace

marketplace-operator-6f5488b997-dbsnm

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-marketplace

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h

Scheduled

Successfully assigned openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h to master-0

openshift-marketplace

community-operators-68vwc

Scheduled

Successfully assigned openshift-marketplace/community-operators-68vwc to master-0

openshift-marketplace

certified-operators-gn8m8

Scheduled

Successfully assigned openshift-marketplace/certified-operators-gn8m8 to master-0

openshift-apiserver-operator

openshift-apiserver-operator-8586dccc9b-49fsv

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-apiserver-operator

openshift-apiserver-operator-8586dccc9b-49fsv

Scheduled

Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv to master-0

openshift-marketplace

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6

Scheduled

Successfully assigned openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 to master-0

openshift-marketplace

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6

Scheduled

Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 to master-0

openshift-etcd-operator

etcd-operator-545bf96f4d-tfmbs

Scheduled

Successfully assigned openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs to master-0

openshift-etcd-operator

etcd-operator-545bf96f4d-tfmbs

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

openshift-kube-scheduler-operator-77cd4d9559-8l7xv

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

openshift-kube-scheduler-operator-77cd4d9559-8l7xv

Scheduled

Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv to master-0

openstack

cinder-b7346-volume-lvm-iscsi-0

Scheduled

Successfully assigned openstack/cinder-b7346-volume-lvm-iscsi-0 to master-0

openstack

cinder-db-create-sns65

Scheduled

Successfully assigned openstack/cinder-db-create-sns65 to master-0

openstack

dnsmasq-dns-555687858c-l6w59

Scheduled

Successfully assigned openstack/dnsmasq-dns-555687858c-l6w59 to master-0

openshift-marketplace

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f

Scheduled

Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f to master-0

openshift-marketplace

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp

Scheduled

Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp to master-0

openshift-machine-config-operator

machine-config-server-xxl55

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-server-xxl55 to master-0

sushy-emulator

sushy-emulator-78f6d7d749-rjgth

Scheduled

Successfully assigned sushy-emulator/sushy-emulator-78f6d7d749-rjgth to master-0

openstack

dnsmasq-dns-55b78786dc-sn557

Scheduled

Successfully assigned openstack/dnsmasq-dns-55b78786dc-sn557 to master-0

openshift-machine-config-operator

machine-config-operator-7f8c75f984-922md

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md to master-0

openshift-machine-config-operator

machine-config-daemon-c56dz

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-c56dz to master-0

openshift-machine-config-operator

machine-config-controller-54cb48566c-9ww5z

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z to master-0

openshift-machine-api

machine-api-operator-5c7cf458b4-65mc5

Scheduled

Successfully assigned openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5 to master-0

openshift-machine-api

control-plane-machine-set-operator-686847ff5f-zzvtt

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt to master-0

openshift-machine-api

cluster-baremetal-operator-d6bb9bb76-54hnv

Scheduled

Successfully assigned openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv to master-0

openshift-machine-api

cluster-autoscaler-operator-86b8dc6d6-mcf2z

Scheduled

Successfully assigned openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z to master-0

openshift-controller-manager

controller-manager-557cb6655b-75nhl

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-dns-operator

dns-operator-8c7d49845-4dhth

Scheduled

Successfully assigned openshift-dns-operator/dns-operator-8c7d49845-4dhth to master-0

openshift-dns-operator

dns-operator-8c7d49845-4dhth

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-dns

node-resolver-ng8tz

Scheduled

Successfully assigned openshift-dns/node-resolver-ng8tz to master-0

openshift-dns

dns-default-cdk2w

Scheduled

Successfully assigned openshift-dns/dns-default-cdk2w to master-0

openstack

nova-cell1-db-create-4kz4t

Scheduled

Successfully assigned openstack/nova-cell1-db-create-4kz4t to master-0

openstack

nova-cell1-conductor-db-sync-lc7xf

Scheduled

Successfully assigned openstack/nova-cell1-conductor-db-sync-lc7xf to master-0

openstack

nova-cell1-conductor-0

Scheduled

Successfully assigned openstack/nova-cell1-conductor-0 to master-0

openstack

nova-cell1-compute-ironic-compute-0

Scheduled

Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0

openstack

nova-cell1-cell-mapping-l5vzg

Scheduled

Successfully assigned openstack/nova-cell1-cell-mapping-l5vzg to master-0

openstack

nova-cell1-c618-account-create-update-mmq8h

Scheduled

Successfully assigned openstack/nova-cell1-c618-account-create-update-mmq8h to master-0

openstack

nova-cell0-db-create-kzhmb

Scheduled

Successfully assigned openstack/nova-cell0-db-create-kzhmb to master-0

openshift-controller-manager

controller-manager-557cb6655b-75nhl

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager-operator

openshift-controller-manager-operator-584cc7bcb5-zz9fm

Scheduled

Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm to master-0

openshift-controller-manager-operator

openshift-controller-manager-operator-584cc7bcb5-zz9fm

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-controller-manager

controller-manager-7657d7494-mmsz6

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-7657d7494-mmsz6 to master-0

openshift-kube-storage-version-migrator

migrator-5c85bff57-txt9d

Scheduled

Successfully assigned openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d to master-0

openshift-controller-manager

controller-manager-669d5ddb7c-jzjkh

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh to master-0

openshift-controller-manager

controller-manager-669d5ddb7c-jzjkh

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-666d7db58c-6d9wp

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-666d7db58c-6d9wp to master-0

openshift-controller-manager

controller-manager-5b94645546-lgnpc

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-5b94645546-lgnpc to master-0

openshift-controller-manager

controller-manager-5b94645546-lgnpc

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-5b94645546-lgnpc

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-storage-version-migrator-operator

kube-storage-version-migrator-operator-fc889cfd5-r6p58

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-storage-version-migrator-operator

kube-storage-version-migrator-operator-fc889cfd5-r6p58

Scheduled

Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58 to master-0

openshift-controller-manager

controller-manager-557cb6655b-75nhl

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-557cb6655b-75nhl to master-0

openshift-authentication-operator

authentication-operator-5bd7c86784-kbb8z

Scheduled

Successfully assigned openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z to master-0

openshift-controller-manager

controller-manager-58c8457759-bzjjl

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-58c8457759-bzjjl to master-0

openshift-controller-manager

controller-manager-58c8457759-bzjjl

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-58c8457759-bzjjl

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication-operator

authentication-operator-5bd7c86784-kbb8z

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openstack

nova-cell0-conductor-db-sync-ph4c9

Scheduled

Successfully assigned openstack/nova-cell0-conductor-db-sync-ph4c9 to master-0

openstack

dnsmasq-dns-564d4966c5-82kwv

Scheduled

Successfully assigned openstack/dnsmasq-dns-564d4966c5-82kwv to master-0

openstack

dnsmasq-dns-5c55964f59-4n57j

Scheduled

Successfully assigned openstack/dnsmasq-dns-5c55964f59-4n57j to master-0

openstack

dnsmasq-dns-5c685c7df5-nbjbv

Scheduled

Successfully assigned openstack/dnsmasq-dns-5c685c7df5-nbjbv to master-0

openstack

dnsmasq-dns-65c6cc445f-5w2gf

Scheduled

Successfully assigned openstack/dnsmasq-dns-65c6cc445f-5w2gf to master-0

openstack

dnsmasq-dns-66c9d5d889-nmpw7

Scheduled

Successfully assigned openstack/dnsmasq-dns-66c9d5d889-nmpw7 to master-0

openstack

dnsmasq-dns-674c8b7b9c-9fj6z

Scheduled

Successfully assigned openstack/dnsmasq-dns-674c8b7b9c-9fj6z to master-0

openstack

dnsmasq-dns-6974cff98c-2t99f

Scheduled

Successfully assigned openstack/dnsmasq-dns-6974cff98c-2t99f to master-0

openstack

dnsmasq-dns-6fbf68b9d7-p96gq

Scheduled

Successfully assigned openstack/dnsmasq-dns-6fbf68b9d7-p96gq to master-0

openstack

dnsmasq-dns-6fcf8f9d6f-578q8

Scheduled

Successfully assigned openstack/dnsmasq-dns-6fcf8f9d6f-578q8 to master-0

openstack

dnsmasq-dns-77dd9bf7ff-sv6dm

Scheduled

Successfully assigned openstack/dnsmasq-dns-77dd9bf7ff-sv6dm to master-0

openstack

dnsmasq-dns-7c45d57b9c-k22s7

Scheduled

Successfully assigned openstack/dnsmasq-dns-7c45d57b9c-k22s7 to master-0

openstack

dnsmasq-dns-7d4c486879-5m7lz

Scheduled

Successfully assigned openstack/dnsmasq-dns-7d4c486879-5m7lz to master-0

openstack

dnsmasq-dns-7d9d8bd467-64rvv

Scheduled

Successfully assigned openstack/dnsmasq-dns-7d9d8bd467-64rvv to master-0

openstack

dnsmasq-dns-84969fcbcc-27cm6

Scheduled

Successfully assigned openstack/dnsmasq-dns-84969fcbcc-27cm6 to master-0

openstack

dnsmasq-dns-bc7f9869-4lgxt

Scheduled

Successfully assigned openstack/dnsmasq-dns-bc7f9869-4lgxt to master-0

openstack

glance-738d-account-create-update-p9hmm

Scheduled

Successfully assigned openstack/glance-738d-account-create-update-p9hmm to master-0

openstack

glance-bdafd-default-external-api-0

Scheduled

Successfully assigned openstack/glance-bdafd-default-external-api-0 to master-0

openstack-operators

infra-operator-controller-manager-5f879c76b6-bv48m

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m to master-0

openstack

glance-bdafd-default-external-api-0

Scheduled

Successfully assigned openstack/glance-bdafd-default-external-api-0 to master-0

openstack

glance-bdafd-default-external-api-0

Scheduled

Successfully assigned openstack/glance-bdafd-default-external-api-0 to master-0

openstack

glance-bdafd-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-bdafd-default-internal-api-0 to master-0

openstack

glance-bdafd-default-internal-api-0

FailedScheduling

running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods "glance-bdafd-default-internal-api-0": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/openstack/glance-bdafd-default-internal-api-0, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d5222863-cc32-4d32-98a7-5dca4ccf8a53, UID in object meta: 9353daa8-f1c5-493d-8f31-bfc3074c6223

openstack

glance-bdafd-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-bdafd-default-internal-api-0 to master-0

openstack

glance-db-create-62w87

Scheduled

Successfully assigned openstack/glance-db-create-62w87 to master-0

openstack

glance-db-sync-f4vxh

Scheduled

Successfully assigned openstack/glance-db-sync-f4vxh to master-0

openstack

ironic-555fd64789-cgpft

Scheduled

Successfully assigned openstack/ironic-555fd64789-cgpft to master-0

openstack

ironic-6cc9f57487-vklxq

Scheduled

Successfully assigned openstack/ironic-6cc9f57487-vklxq to master-0

openstack

ironic-b901-account-create-update-vmptn

Scheduled

Successfully assigned openstack/ironic-b901-account-create-update-vmptn to master-0

openstack

ironic-conductor-0

Scheduled

Successfully assigned openstack/ironic-conductor-0 to master-0

openstack

ironic-db-create-hgms6

Scheduled

Successfully assigned openstack/ironic-db-create-hgms6 to master-0

openstack

ironic-db-sync-s9d6l

Scheduled

Successfully assigned openstack/ironic-db-sync-s9d6l to master-0

openstack

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openstack

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openstack

ironic-inspector-6402-account-create-update-kj7ts

Scheduled

Successfully assigned openstack/ironic-inspector-6402-account-create-update-kj7ts to master-0

openstack

ironic-inspector-db-create-pwcj4

Scheduled

Successfully assigned openstack/ironic-inspector-db-create-pwcj4 to master-0

openstack

ironic-inspector-db-sync-pd272

Scheduled

Successfully assigned openstack/ironic-inspector-db-sync-pd272 to master-0

openstack

ironic-neutron-agent-856d98ff5d-2p7np

Scheduled

Successfully assigned openstack/ironic-neutron-agent-856d98ff5d-2p7np to master-0

openstack-operators

horizon-operator-controller-manager-5b9b8895d5-gmljt

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt to master-0

openstack

keystone-64cf598f88-t2877

Scheduled

Successfully assigned openstack/keystone-64cf598f88-t2877 to master-0

openstack

keystone-7814-account-create-update-vkdnw

Scheduled

Successfully assigned openstack/keystone-7814-account-create-update-vkdnw to master-0

openshift-authentication

oauth-openshift-5584b45765-vxlqk

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-5584b45765-vxlqk

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-5584b45765-vxlqk

FailedScheduling

skip schedule deleting pod: openshift-authentication/oauth-openshift-5584b45765-vxlqk
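The three oauth-openshift events above are the usual single-node rollout pattern: the replacement replica cannot land on master-0 while the old replica still matches its pod anti-affinity rule, and scheduling is skipped once the pending pod is itself deleted. The scheduler's summary line parses mechanically; a sketch (the message format is the upstream scheduler's, the helper name is ours):

```python
import re

def parse_failed_scheduling(msg: str):
    """Extract node counts and the per-node reasons from a kube-scheduler
    FailedScheduling message of the form 'A/B nodes are available: ...'."""
    m = re.match(r"(\d+)/(\d+) nodes are available: (.+?)\. preemption:", msg)
    if not m:
        return None  # e.g. 'no nodes available' or 'skip schedule' messages
    return {
        "available": int(m.group(1)),
        "total": int(m.group(2)),
        "reasons": m.group(3),
    }

msg = ("0/1 nodes are available: 1 node(s) didn't match pod anti-affinity "
       "rules. preemption: 0/1 nodes are available: 1 node(s) didn't match "
       "pod anti-affinity rules.")
print(parse_failed_scheduling(msg))
```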

openstack

keystone-bootstrap-dcp4q

Scheduled

Successfully assigned openstack/keystone-bootstrap-dcp4q to master-0

openstack

keystone-bootstrap-trt9l

Scheduled

Successfully assigned openstack/keystone-bootstrap-trt9l to master-0

openshift-authentication

oauth-openshift-64b7796859-6g644

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-64b7796859-6g644

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-64b7796859-6g644 to master-0

openstack

keystone-cron-29531881-w8jl5

Scheduled

Successfully assigned openstack/keystone-cron-29531881-w8jl5 to master-0

openstack

keystone-db-create-f9qxr

Scheduled

Successfully assigned openstack/keystone-db-create-f9qxr to master-0

openstack

keystone-db-sync-j2nkz

Scheduled

Successfully assigned openstack/keystone-db-sync-j2nkz to master-0

openstack

memcached-0

Scheduled

Successfully assigned openstack/memcached-0 to master-0

openstack

neutron-564b95b965-jqq92

Scheduled

Successfully assigned openstack/neutron-564b95b965-jqq92 to master-0

openshift-authentication

oauth-openshift-6d4d899fc6-cgn6l

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l to master-0

openstack

neutron-828b-account-create-update-4bgnt

Scheduled

Successfully assigned openstack/neutron-828b-account-create-update-4bgnt to master-0

openstack

neutron-d477bdc58-p8d8s

Scheduled

Successfully assigned openstack/neutron-d477bdc58-p8d8s to master-0

openstack

neutron-db-create-fcxq8

Scheduled

Successfully assigned openstack/neutron-db-create-fcxq8 to master-0

openstack

neutron-db-sync-m7xgd

Scheduled

Successfully assigned openstack/neutron-db-sync-m7xgd to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

nova-api-db-create-qrtq2

Scheduled

Successfully assigned openstack/nova-api-db-create-qrtq2 to master-0

openstack

nova-api-e077-account-create-update-fnxnr

Scheduled

Successfully assigned openstack/nova-api-e077-account-create-update-fnxnr to master-0

openstack-operators

heat-operator-controller-manager-69f49c598c-75df9

Scheduled

Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-75df9 to master-0

openstack

nova-cell0-8a9d-account-create-update-hxq4n

Scheduled

Successfully assigned openstack/nova-cell0-8a9d-account-create-update-hxq4n to master-0

openstack

nova-cell0-cell-mapping-fck78

Scheduled

Successfully assigned openstack/nova-cell0-cell-mapping-fck78 to master-0

openstack

nova-cell0-conductor-0

Scheduled

Successfully assigned openstack/nova-cell0-conductor-0 to master-0

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed
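The six apiserver events above are one graceful-termination cycle reported out of chronological order (the Time column is absent here). They can be re-sorted with a fixed phase ranking; the ordering below is our assumption about the kube-apiserver termination lifecycle, not something stated in this log:

```python
# Assumed graceful-shutdown phase order for kube-apiserver termination
# events; used only to re-sort rows whose timestamps are missing.
PHASE_ORDER = [
    "ShutdownInitiated",
    "TerminationPreShutdownHooksFinished",
    "AfterShutdownDelayDuration",
    "InFlightRequestsDrained",
    "HTTPServerStoppedListening",
    "TerminationGracefulTerminationFinished",
]

def sort_shutdown_events(reasons):
    rank = {r: i for i, r in enumerate(PHASE_ORDER)}
    # Unknown reasons sort last so the helper is safe on mixed input.
    return sorted(reasons, key=lambda r: rank.get(r, len(PHASE_ORDER)))

observed = [
    "HTTPServerStoppedListening",
    "TerminationPreShutdownHooksFinished",
    "AfterShutdownDelayDuration",
    "ShutdownInitiated",
    "InFlightRequestsDrained",
    "TerminationGracefulTerminationFinished",
]
print(sort_shutdown_events(observed))
```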

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_83e8e791-d4d1-4399-8402-064fa602fc36 became leader

kube-system

Required control plane pods have been created

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_94d9e84d-8a83-4fb0-a4b4-8cc35b2ed75a became leader

kube-system

cluster-policy-controller

bootstrap-kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_583db194-0d94-4b1a-b061-6665658a6292 became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_c3d70c55-854e-4e6f-82cf-f175b4ce9d5f became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for default namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-node-lease namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-public namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-system namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-version namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for assisted-installer namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler namespace
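The long run of CreatedSCCRanges events above and below is the namespace-security-allocation-controller assigning security context constraint ranges to each namespace as it appears. Pulling the namespace names out of those messages is a one-line match (helper name illustrative):

```python
import re

def scc_namespace(message: str):
    """Return the namespace named in a CreatedSCCRanges event message,
    or None if the message has a different shape."""
    m = re.match(r"created SCC ranges for (\S+) namespace", message)
    return m.group(1) if m else None

msgs = [
    "created SCC ranges for openshift-infra namespace",
    "created SCC ranges for kube-system namespace",
]
print([scc_namespace(m) for m in msgs])
```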

assisted-installer

job-controller

assisted-installer-controller

SuccessfulCreate

Created pod: assisted-installer-controller-r6zx7

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_9642d31d-a2c3-4d54-bdf4-253f2cff9e4f became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-credential-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-operator namespace

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_c3d70c55-854e-4e6f-82cf-f175b4ce9d5f stopped leading

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-5cfd9759cf to 1

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_1f3dbbc6-0e18-4bdb-95a9-40d6e07e3180 became leader

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64"
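RetrievePayload, LoadPayload, PayloadLoaded above is the cluster-version-operator's normal sequence for bringing up release 4.18.33. The key="value" fields in those messages parse cleanly; a sketch, assuming the quoted format shown in the log:

```python
import re

def parse_cvo_fields(msg: str) -> dict:
    """Parse key="value" pairs from a cluster-version-operator event message."""
    return dict(re.findall(r'(\w+)="([^"]*)"', msg))

msg = ('Payload loaded version="4.18.33" '
       'image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637'
       'bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" '
       'architecture="amd64"')
fields = parse_cvo_fields(msg)
print(fields["version"], fields["architecture"])
```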

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-network-config-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-storage-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-csi-drivers namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-node-tuning-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-machine-approver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-insights namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-marketplace namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-samples-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-image-registry namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-olm-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-openstack-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kni-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-lifecycle-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovirt-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-vsphere-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operators namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nutanix-infra namespace

openshift-kube-scheduler-operator

deployment-controller

openshift-kube-scheduler-operator

ScalingReplicaSet

Scaled up replica set openshift-kube-scheduler-operator-77cd4d9559 to 1

openshift-cluster-olm-operator

deployment-controller

cluster-olm-operator

ScalingReplicaSet

Scaled up replica set cluster-olm-operator-5bd7768f54 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-platform-infra namespace

openshift-apiserver-operator

deployment-controller

openshift-apiserver-operator

ScalingReplicaSet

Scaled up replica set openshift-apiserver-operator-8586dccc9b to 1

openshift-kube-storage-version-migrator-operator

deployment-controller

kube-storage-version-migrator-operator

ScalingReplicaSet

Scaled up replica set kube-storage-version-migrator-operator-fc889cfd5 to 1

openshift-kube-controller-manager-operator

deployment-controller

kube-controller-manager-operator

ScalingReplicaSet

Scaled up replica set kube-controller-manager-operator-7bcfbc574b to 1

openshift-network-operator

deployment-controller

network-operator

ScalingReplicaSet

Scaled up replica set network-operator-7d7db75979 to 1

openshift-dns-operator

deployment-controller

dns-operator

ScalingReplicaSet

Scaled up replica set dns-operator-8c7d49845 to 1

openshift-marketplace

deployment-controller

marketplace-operator

ScalingReplicaSet

Scaled up replica set marketplace-operator-6f5488b997 to 1

openshift-controller-manager-operator

deployment-controller

openshift-controller-manager-operator

ScalingReplicaSet

Scaled up replica set openshift-controller-manager-operator-584cc7bcb5 to 1

openshift-service-ca-operator

deployment-controller

service-ca-operator

ScalingReplicaSet

Scaled up replica set service-ca-operator-c48c8bf7c to 1
(x2)

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

openshift-etcd-operator

deployment-controller

etcd-operator

ScalingReplicaSet

Scaled up replica set etcd-operator-545bf96f4d to 1

openshift-authentication-operator

deployment-controller

authentication-operator

ScalingReplicaSet

Scaled up replica set authentication-operator-5bd7c86784 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-user-workload-monitoring namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-monitoring namespace
(x9)

assisted-installer

default-scheduler

assisted-installer-controller-r6zx7

FailedScheduling

no nodes available to schedule pods

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-managed namespace
(x12)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-77cd4d9559

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-77cd4d9559-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
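From here on, most rows carry an (xN) marker: the ManagementCPUsOverride admission plugin rejects every operator pod while the cluster has no registered nodes, so each ReplicaSet accumulates many FailedCreate events with an identical message and the table collapses them into counts. That aggregation is easy to reproduce over raw events (a sketch; the dict field names are ours, not the Event API's):

```python
from collections import Counter

def aggregate(events):
    """Collapse repeated (namespace, object, reason) events into (xN) rows,
    mirroring the count markers shown in this table."""
    counts = Counter((e["ns"], e["obj"], e["reason"]) for e in events)
    return [
        (ns, obj, reason, f"(x{n})")
        for (ns, obj, reason), n in counts.items()
    ]

# Twelve identical FailedCreate events, as in the row above.
events = [{
    "ns": "openshift-kube-scheduler-operator",
    "obj": "openshift-kube-scheduler-operator-77cd4d9559",
    "reason": "FailedCreate",
}] * 12
print(aggregate(events))
```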
(x12)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-5bd7768f54

FailedCreate

Error creating: pods "cluster-olm-operator-5bd7768f54-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config namespace
(x12)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-8586dccc9b

FailedCreate

Error creating: pods "openshift-apiserver-operator-8586dccc9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-dns-operator

replicaset-controller

dns-operator-8c7d49845

FailedCreate

Error creating: pods "dns-operator-8c7d49845-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-network-operator

replicaset-controller

network-operator-7d7db75979

FailedCreate

Error creating: pods "network-operator-7d7db75979-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-api namespace
(x12)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-c48c8bf7c

FailedCreate

Error creating: pods "service-ca-operator-c48c8bf7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-7bcfbc574b

FailedCreate

Error creating: pods "kube-controller-manager-operator-7bcfbc574b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-584cc7bcb5

FailedCreate

Error creating: pods "openshift-controller-manager-operator-584cc7bcb5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-fc889cfd5

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-fc889cfd5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller-operator

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-operator-6fb4df594f to 1
(x12)

openshift-authentication-operator

replicaset-controller

authentication-operator-5bd7c86784

FailedCreate

Error creating: pods "authentication-operator-5bd7c86784-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-marketplace

replicaset-controller

marketplace-operator-6f5488b997

FailedCreate

Error creating: pods "marketplace-operator-6f5488b997-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-etcd-operator

replicaset-controller

etcd-operator-545bf96f4d

FailedCreate

Error creating: pods "etcd-operator-545bf96f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-cluster-node-tuning-operator

deployment-controller

cluster-node-tuning-operator

ScalingReplicaSet

Scaled up replica set cluster-node-tuning-operator-bcf775fc9 to 1

(x14)

openshift-cluster-version

replicaset-controller

cluster-version-operator-5cfd9759cf

FailedCreate

Error creating: pods "cluster-version-operator-5cfd9759cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-6bb6d78bf to 1
(x10)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-6fb4df594f

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-6fb4df594f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-6bb6d78bf to 1

openshift-operator-lifecycle-manager

deployment-controller

package-server-manager

ScalingReplicaSet

Scaled up replica set package-server-manager-5c75f78c8b to 1

openshift-ingress-operator

deployment-controller

ingress-operator

ScalingReplicaSet

Scaled up replica set ingress-operator-6569778c84 to 1

openshift-kube-apiserver-operator

deployment-controller

kube-apiserver-operator

ScalingReplicaSet

Scaled up replica set kube-apiserver-operator-5d87bf58c to 1
(x9)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

FailedCreate

Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-ingress-operator

replicaset-controller

ingress-operator-6569778c84

FailedCreate

Error creating: pods "ingress-operator-6569778c84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-5c75f78c8b

FailedCreate

Error creating: pods "package-server-manager-5c75f78c8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

FailedCreate

Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening
(x10)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

kube-system

Required control plane pods have been created

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished
(x10)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished
(x8)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-5d87bf58c

FailedCreate

Error creating: pods "kube-apiserver-operator-5d87bf58c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_74debcee-feea-4a9b-8282-c9aa1ed84855 became leader

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_83d1bc4e-cee4-4a5d-a2cc-9d1c74b2fb8a became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_65e58e15-7c4f-412c-b34e-c85a1ed36bd3 became leader

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found
(x7)

openshift-network-operator

replicaset-controller

network-operator-7d7db75979

FailedCreate

Error creating: pods "network-operator-7d7db75979-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

FailedCreate

Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-authentication-operator

replicaset-controller

authentication-operator-5bd7c86784

FailedCreate

Error creating: pods "authentication-operator-5bd7c86784-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-c48c8bf7c

FailedCreate

Error creating: pods "service-ca-operator-c48c8bf7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-dns-operator

replicaset-controller

dns-operator-8c7d49845

FailedCreate

Error creating: pods "dns-operator-8c7d49845-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-77cd4d9559

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-77cd4d9559-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-6fb4df594f

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-6fb4df594f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-6bb6d78bf | FailedCreate | Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-5bd7768f54 | FailedCreate | Error creating: pods "cluster-olm-operator-5bd7768f54-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c75f78c8b | FailedCreate | Error creating: pods "package-server-manager-5c75f78c8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-8586dccc9b | FailedCreate | Error creating: pods "openshift-apiserver-operator-8586dccc9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-marketplace | replicaset-controller | marketplace-operator-6f5488b997 | FailedCreate | Error creating: pods "marketplace-operator-6f5488b997-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-cluster-version | replicaset-controller | cluster-version-operator-5cfd9759cf | FailedCreate | Error creating: pods "cluster-version-operator-5cfd9759cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-etcd-operator | replicaset-controller | etcd-operator-545bf96f4d | FailedCreate | Error creating: pods "etcd-operator-545bf96f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-ingress-operator | replicaset-controller | ingress-operator-6569778c84 | FailedCreate | Error creating: pods "ingress-operator-6569778c84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-7bcfbc574b | FailedCreate | Error creating: pods "kube-controller-manager-operator-7bcfbc574b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-5d87bf58c | FailedCreate | Error creating: pods "kube-apiserver-operator-5d87bf58c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-584cc7bcb5 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-584cc7bcb5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-77cd4d9559 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-77cd4d9559-8l7xv
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-bcf775fc9 | FailedCreate | Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-fc889cfd5 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-fc889cfd5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-8586dccc9b | SuccessfulCreate | Created pod: openshift-apiserver-operator-8586dccc9b-49fsv

openshift-monitoring | replicaset-controller | cluster-monitoring-operator-6bb6d78bf | SuccessfulCreate | Created pod: cluster-monitoring-operator-6bb6d78bf-mzb7q
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-5bd7768f54 | SuccessfulCreate | Created pod: cluster-olm-operator-5bd7768f54-qh6j7
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-bcf775fc9 | SuccessfulCreate | Created pod: cluster-node-tuning-operator-bcf775fc9-h99t4
openshift-authentication-operator | replicaset-controller | authentication-operator-5bd7c86784 | SuccessfulCreate | Created pod: authentication-operator-5bd7c86784-kbb8z
openshift-network-operator | replicaset-controller | network-operator-7d7db75979 | SuccessfulCreate | Created pod: network-operator-7d7db75979-4fk6k
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c75f78c8b | SuccessfulCreate | Created pod: package-server-manager-5c75f78c8b-9d82f
openshift-service-ca-operator | replicaset-controller | service-ca-operator-c48c8bf7c | SuccessfulCreate | Created pod: service-ca-operator-c48c8bf7c-mcdrl
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-6fb4df594f | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-6fb4df594f-8tv99
openshift-dns-operator | replicaset-controller | dns-operator-8c7d49845 | SuccessfulCreate | Created pod: dns-operator-8c7d49845-4dhth
openshift-cluster-version | replicaset-controller | cluster-version-operator-5cfd9759cf | SuccessfulCreate | Created pod: cluster-version-operator-5cfd9759cf-r4rf2
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-5d87bf58c | SuccessfulCreate | Created pod: kube-apiserver-operator-5d87bf58c-ncrqj
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-fc889cfd5 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-fc889cfd5-r6p58
openshift-etcd-operator | replicaset-controller | etcd-operator-545bf96f4d | SuccessfulCreate | Created pod: etcd-operator-545bf96f4d-tfmbs
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-584cc7bcb5 | SuccessfulCreate | Created pod: openshift-controller-manager-operator-584cc7bcb5-zz9fm
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller

openshift-marketplace | replicaset-controller | marketplace-operator-6f5488b997 | SuccessfulCreate | Created pod: marketplace-operator-6f5488b997-dbsnm
openshift-ingress-operator | replicaset-controller | ingress-operator-6569778c84 | SuccessfulCreate | Created pod: ingress-operator-6569778c84-rr8r7
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-7bcfbc574b | SuccessfulCreate | Created pod: kube-controller-manager-operator-7bcfbc574b-8zrj9
openshift-network-operator | kubelet | network-operator-7d7db75979-4fk6k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83"
assisted-installer | kubelet | assisted-installer-controller-r6zx7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8"
assisted-installer | kubelet | assisted-installer-controller-r6zx7 | Started | Started container assisted-installer-controller
openshift-network-operator | kubelet | network-operator-7d7db75979-4fk6k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" in 4.391s (4.391s including waiting). Image size: 621542709 bytes.
assisted-installer | kubelet | assisted-installer-controller-r6zx7 | Created | Created container: assisted-installer-controller
assisted-installer | kubelet | assisted-installer-controller-r6zx7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8" in 4.421s (4.421s including waiting). Image size: 687849728 bytes.
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio (x4)
openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_a4306989-6c80-4cc0-8f22-e3295f03b11e became leader
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine (x4)
openshift-network-operator | kubelet | network-operator-7d7db75979-4fk6k | Created | Created container: network-operator
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio (x4)
openshift-network-operator | kubelet | network-operator-7d7db75979-4fk6k | Started | Started container network-operator
openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

assisted-installer | job-controller | assisted-installer-controller | Completed | Job completed
openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-cg7zd
openshift-network-operator | kubelet | mtu-prober-cg7zd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine
openshift-network-operator | kubelet | mtu-prober-cg7zd | Started | Started container prober
openshift-network-operator | kubelet | mtu-prober-cg7zd | Created | Created container: prober
openshift-network-operator | job-controller | mtu-prober | Completed | Job completed

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace
openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-jknmn
openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-8qp5g
openshift-multus | kubelet | multus-8qp5g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd"
openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-2vsjh
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec"
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Created | Created container: egress-router-binary-copy
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Started | Started container egress-router-binary-copy
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec" in 2.247s (2.247s including waiting). Image size: 528829499 bytes.

openshift-multus | replicaset-controller | multus-admission-controller-5f98f4f8d5 | SuccessfulCreate | Created pod: multus-admission-controller-5f98f4f8d5-b985k
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5f98f4f8d5 to 1
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Created | Created container: cni-plugins
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Started | Started container cni-plugins
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e" in 6.818s (6.818s including waiting). Image size: 682963466 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace

openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-jtdzc
openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-5d8dfcdc87 to 1
openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-5d8dfcdc87 | SuccessfulCreate | Created pod: ovnkube-control-plane-5d8dfcdc87-b8ght
openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-58fb6744f5 to 1
openshift-network-diagnostics | replicaset-controller | network-check-source-58fb6744f5 | SuccessfulCreate | Created pod: network-check-source-58fb6744f5-kn2z7
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd"
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-b8ght | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-b8ght | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd"
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568" in 4.352s (4.352s including waiting). Image size: 411485245 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-b8ght | Started | Started container kube-rbac-proxy
openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-vp2jg
openshift-multus | kubelet | multus-8qp5g | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" in 14.993s (14.993s including waiting). Image size: 1237794314 bytes.
openshift-multus | kubelet | multus-8qp5g | Created | Created container: kube-multus
openshift-multus | kubelet | multus-8qp5g | Started | Started container kube-multus
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-b8ght | Created | Created container: kube-rbac-proxy

openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Started | Started container bond-cni-plugin
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Created | Created container: bond-cni-plugin
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Started | Started container routeoverride-cni
openshift-network-node-identity | kubelet | network-node-identity-rlg4x | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Created | Created container: routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0" in 1.957s (1.957s including waiting). Image size: 407241636 bytes.

openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-rlg4x
openshift-network-node-identity | kubelet | network-node-identity-rlg4x | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd"
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021"
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" in 13.06s (13.06s including waiting). Image size: 875998518 bytes.
openshift-network-node-identity | kubelet | network-node-identity-rlg4x | Started | Started container approver
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Created | Created container: kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 17.711s (17.711s including waiting). Image size: 1637274270 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-b8ght | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 17.422s (17.422s including waiting). Image size: 1637274270 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-b8ght | Created | Created container: ovnkube-cluster-manager
openshift-network-node-identity | kubelet | network-node-identity-rlg4x | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 13.615s (13.615s including waiting). Image size: 1637274270 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-b8ght | Started | Started container ovnkube-cluster-manager
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Created | Created container: whereabouts-cni-bincopy
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Started | Started container whereabouts-cni-bincopy
openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5d8dfcdc87-b8ght became leader
openshift-multus | kubelet | network-metrics-daemon-2vsjh | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered (x7)
openshift-network-node-identity | kubelet | network-node-identity-rlg4x | Created | Created container: webhook
openshift-network-node-identity | kubelet | network-node-identity-rlg4x | Started | Started container webhook
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Started | Started container kubecfg-setup

openshift-network-node-identity | kubelet | network-node-identity-rlg4x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-network-node-identity | kubelet | network-node-identity-rlg4x | Created | Created container: approver
openshift-network-node-identity | master-0_0ebcab9a-2fda-4703-8b43-501323b77bf4 | ovnkube-identity | LeaderElection | master-0_0ebcab9a-2fda-4703-8b43-501323b77bf4 became leader
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Started | Started container whereabouts-cni
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Created | Created container: ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Started | Started container ovn-controller
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Created | Created container: whereabouts-cni
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Created | Created container: ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Started | Started container ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Created | Created container: kube-rbac-proxy-node
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Started | Started container kube-rbac-proxy-node
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Created | Created container: northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Started | Started container northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-multus | kubelet | network-metrics-daemon-2vsjh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x18)
openshift-ovn-kubernetes | kubelet | ovnkube-node-jtdzc | Started | Started container nbdb
openshift-multus | kubelet | multus-additional-cni-plugins-jknmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-jknmn

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-jknmn

Created

Created container: kube-multus-additional-cni-plugins

openshift-ovn-kubernetes

kubelet

ovnkube-node-jtdzc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-jtdzc

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-jtdzc

Started

Started container sbdb

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-jtdzc

default

ovnkube-csr-approver-controller

csr-7wrfp

CSRApproved

CSR "csr-7wrfp" has been approved

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-vd82q

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-vd82q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Created | Created container: kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Started | Started container nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Started | Started container kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Created | Created container: northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Started | Started container northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Created | Created container: sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Started | Started container sbdb (x8)
openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-r4rf2 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine (x7)
openshift-network-diagnostics | kubelet | network-check-target-vp2jg | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-ckfnc" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
openshift-ovn-kubernetes | kubelet | ovnkube-node-vd82q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
default | ovnk-controlplane | master-0 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0] (x18)
openshift-network-diagnostics | kubelet | network-check-target-vp2jg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

default | ovnkube-csr-approver-controller | csr-2d6xw | CSRApproved | CSR "csr-2d6xw" has been approved
openshift-service-ca-operator | multus | service-ca-operator-c48c8bf7c-mcdrl | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes
openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-r2vvc
openshift-network-operator | kubelet | iptables-alerter-r2vvc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9"
openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-mcdrl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83"
openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-tfmbs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396"
openshift-etcd-operator | multus | etcd-operator-545bf96f4d-tfmbs | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes
openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-77cd4d9559-8l7xv | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-8l7xv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7"
openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-fc889cfd5-r6p58 | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-r6p58 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc"
openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-7bcfbc574b-8zrj9 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes
openshift-apiserver-operator | multus | openshift-apiserver-operator-8586dccc9b-49fsv | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-ncrqj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine
openshift-kube-apiserver-operator | multus | kube-apiserver-operator-5d87bf58c-ncrqj | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes
openshift-authentication-operator | multus | authentication-operator-5bd7c86784-kbb8z | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5d87bf58c-ncrqj_9d686ab9-30d4-4e44-8562-31f3ba898aa0 became leader
openshift-authentication-operator | kubelet | authentication-operator-5bd7c86784-kbb8z | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e"
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-ncrqj | Started | Started container kube-apiserver-operator
openshift-controller-manager-operator | multus | openshift-controller-manager-operator-584cc7bcb5-zz9fm | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-zz9fm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896"
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-49fsv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19"
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-ncrqj | Created | Created container: kube-apiserver-operator
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-8zrj9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac"
openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-6fb4df594f-8tv99 | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-8tv99 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e"
openshift-cluster-olm-operator | multus | cluster-olm-operator-5bd7768f54-qh6j7 | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-qh6j7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.33"
openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.33"}]
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.")
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready
openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc"
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well") (x4)
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-b985k | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found (x4)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found (x4)
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-9d82f | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found (x4)
openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-mzb7q | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found (x4)
openshift-marketplace | kubelet | marketplace-operator-6f5488b997-dbsnm | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found (x4)
openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-mzb7q | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found (x4)
openshift-ingress-operator | kubelet | ingress-operator-6569778c84-rr8r7 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found (x4)
openshift-dns-operator | kubelet | dns-operator-8c7d49845-4dhth | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found (x4)
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-b985k | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 (x4)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found (x4)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found (x4)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"
default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods
default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed
default | kubelet | master-0 | Starting | Starting kubelet.
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing
default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory
default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-r6p58 | Failed | Error: ErrImagePull
openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-r6p58 | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc": rpc error: code = Canceled desc = copying config: context canceled
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-8zrj9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing
openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-tfmbs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396"
openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-mcdrl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-qh6j7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-8l7xv | Failed | Error: ErrImagePull
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-8l7xv | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7": rpc error: code = Canceled desc = copying config: context canceled
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-network-operator

kubelet

iptables-alerter-r2vvc

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9": rpc error: code = Canceled desc = copying config: context canceled

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-8586dccc9b-49fsv

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19": rpc error: code = Canceled desc = copying config: context canceled

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-8586dccc9b-49fsv

Failed

Error: ErrImagePull

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-network-operator

kubelet

iptables-alerter-r2vvc

Failed

Error: ErrImagePull

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e": rpc error: code = Canceled desc = copying config: context canceled

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Failed

Error: ErrImagePull

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-zz9fm

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896": rpc error: code = Canceled desc = copying config: context canceled

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-zz9fm

Failed

Error: ErrImagePull

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Started

Started container copy-catalogd-manifests

openshift-network-diagnostics

kubelet

network-check-target-vp2jg

Started

Started container network-check-target-container

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Created

Created container: copy-catalogd-manifests

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing

openshift-service-ca-operator

kubelet

service-ca-operator-c48c8bf7c-mcdrl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" in 4.78s (4.78s including waiting). Image size: 508443359 bytes.

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" in 4.466s (4.466s including waiting). Image size: 447940744 bytes.

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-tfmbs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" in 4.852s (4.852s including waiting). Image size: 518279996 bytes.

openshift-network-diagnostics

kubelet

network-check-target-vp2jg

Created

Created container: network-check-target-container

openshift-network-diagnostics

kubelet

network-check-target-vp2jg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6fb4df594f-8tv99

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e": rpc error: code = Canceled desc = copying config: context canceled

openshift-network-diagnostics

multus

network-check-target-vp2jg

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-7bcfbc574b-8zrj9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" in 4.551s (4.551s including waiting). Image size: 508786786 bytes.

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6fb4df594f-8tv99

Failed

Error: ErrImagePull

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-7bcfbc574b-8zrj9_586bc55f-4b79-49cc-a047-f3b2b0aaf7c5 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.33"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"etcd-pod\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-c48c8bf7c-mcdrl_00555996-ffac-470f-bd16-544112a9f413 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-545bf96f4d-tfmbs_b231cddb-0896-4943-849a-a0549b114902 became leader

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.33"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.33"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca namespace

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.33"
(x5)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well")
(x5)

openshift-dns-operator

kubelet

dns-operator-8c7d49845-4dhth

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
(x5)

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-r4rf2

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-h99t4

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-h99t4

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
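
The featureGates message above is a single comma-separated list of `Name=true|false` pairs. A minimal sketch of splitting such a list for inspection (the helper name is ours, not part of the operator):

```python
def parse_feature_gates(gates: str) -> dict[str, bool]:
    """Split a comma-separated Name=true/false list into a dict."""
    result = {}
    for pair in gates.split(","):
        name, _, value = pair.partition("=")
        result[name.strip()] = value.strip() == "true"
    return result

# Two gates taken from the event above:
flags = parse_feature_gates("AdminNetworkPolicy=true,GatewayAPI=false")
```

This makes it easy to diff the enabled set between two events rather than eyeballing the raw string.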

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-h99t4

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-qdhhk")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-h99t4

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")
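
The transition above follows the usual clusteroperator condition convention: an operator is considered healthy once it reports Available=True with Progressing=False and Degraded=False. A sketch of that check (the function name and dict shape are ours, not an OpenShift API):

```python
def operator_healthy(conditions: dict[str, bool]) -> bool:
    """Conventional clusteroperator health: Available, not
    Progressing, and not Degraded."""
    return (conditions.get("Available", False)
            and not conditions.get("Progressing", True)
            and not conditions.get("Degraded", True))

# service-ca at this point in the log: rolling out, so not yet healthy.
operator_healthy({"Available": True, "Progressing": True, "Degraded": False})
```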

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-576b4d78bd to 1

openshift-service-ca

replicaset-controller

service-ca-576b4d78bd

SuccessfulCreate

Created pod: service-ca-576b4d78bd-fsmrl

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-service-ca

multus

service-ca-576b4d78bd-fsmrl

AddedInterface

Add eth0 [10.128.0.23/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

RequiredInstallerResourcesMissing

configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b" in 4.316s (4.316s including waiting). Image size: 494959854 bytes.

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
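
The write error above is the API server's optimistic-concurrency conflict: the operator's cached copy of the `etcds.operator.openshift.io/cluster` object had a stale resourceVersion, so the update was rejected and must be retried against the latest version (which the operator does on its next sync). The generic pattern looks like this sketch; the names here are illustrative, not the operator's actual code:

```python
class Conflict(Exception):
    """Stands in for the HTTP 409 returned when a write carries a
    stale resourceVersion."""

def update_with_retry(fetch, mutate, push, attempts: int = 5):
    """Optimistic-concurrency loop: re-read the object and re-apply
    the mutation whenever the write conflicts."""
    for _ in range(attempts):
        obj = fetch()
        try:
            return push(mutate(obj))
        except Conflict:
            continue  # someone else wrote first; refetch and retry
    raise Conflict(f"gave up after {attempts} attempts")
```

client-go exposes the same pattern as `retry.RetryOnConflict`; the key point is that the mutation is re-applied to a freshly fetched object each attempt, never to the stale copy.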

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-576b4d78bd-fsmrl_acd85c35-6c89-486e-9cbe-418bd23a79b5 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: EvaluationConditionsDetected changed from Unknown to False ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Created

Created container: copy-operator-controller-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Started

Started container copy-operator-controller-manifests

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.33"

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7"

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "admission": map[string]any{ +  "pluginConfig": map[string]any{ +  "PodSecurity": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, +  }, +  }, +  "apiServerArguments": map[string]any{ +  "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  "goaway-chance": []any{string("0")}, +  "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, +  "send-retry-after-while-not-ready-once": []any{string("true")}, +  "service-account-issuer": []any{string("https://kubernetes.default.svc")}, +  "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, +  "shutdown-delay-duration": []any{string("0s")}, +  }, +  "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, +  "gracefulTerminationDuration": string("15"), +  "servicesSubnet": string("172.30.0.0/16"), +  "servingInfo": map[string]any{ +  "bindAddress": string("0.0.0.0:6443"), +  "bindNetwork": string("tcp4"), +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  "namedCertificates": []any{ +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resou"...), +  "keyFile": string("/etc/kubernetes/static-pod-resou"...), +  }, +  }, +  },   }

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-r6p58

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc"

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-r6p58

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" in 391ms (391ms including waiting). Image size: 504513960 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator namespace

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ServiceAccountCreated

Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.33"}]
(x2)

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.33"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-fc889cfd5-r6p58_5b4ea6cd-b6ef-40c6-ad37-efaf8b4c03cb became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-qh6j7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7" in 2.265s (2.265s including waiting). Image size: 511059399 bytes.

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

openshift-kube-storage-version-migrator

replicaset-controller

migrator-5c85bff57

SuccessfulCreate

Created pod: migrator-5c85bff57-txt9d

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-5c85bff57 to 1

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-qdhhk")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, }

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-77cd4d9559-8l7xv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7"

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-77cd4d9559-8l7xv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" in 405ms (405ms including waiting). Image size: 506291135 bytes.
(x77)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" 
"openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" 
"operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing
(x2)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.33"

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-5bd7768f54-qh6j7_e4f9b39f-404b-4882-b6e0-d8984cbde771 became leader

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing
(x2)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" in 404ms (404ms including waiting). Image size: 513119434 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator

multus

migrator-5c85bff57-txt9d

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.33"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.33"}]
(x6)

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x6)

openshift-multus

kubelet

network-metrics-daemon-2vsjh

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-5bd7c86784-kbb8z_fc1f20a4-a077-491f-b58a-115d534fe158 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-77cd4d9559-8l7xv_c3598bc0-17d3-47bb-935b-8e8b0fd1d20e became leader
(x6)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found
(x6)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-9d82f

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-txt9d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-8586dccc9b-49fsv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19"

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-txt9d

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015" in 1.717s (1.717s including waiting). Image size: 443170136 bytes.

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-bcf775fc9-h99t4

AddedInterface

Add eth0 [10.128.0.14/23] from ovn-kubernetes

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-h99t4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2"

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-8586dccc9b-49fsv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" in 767ms (767ms including waiting). Image size: 512172666 bytes.

openshift-ingress-operator

multus

ingress-operator-6569778c84-rr8r7

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready")

openshift-network-operator

kubelet

iptables-alerter-r2vvc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" in 793ms (793ms including waiting). Image size: 582052489 bytes.
(x2)

openshift-network-operator

kubelet

iptables-alerter-r2vvc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well")
(x2)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "operator" changed from "" to "4.18.33"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."),Upgradeable changed from Unknown to True ("All is well")

openshift-dns-operator

kubelet

dns-operator-8c7d49845-4dhth

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3"

openshift-dns-operator

multus

dns-operator-8c7d49845-4dhth

AddedInterface

Add eth0 [10.128.0.19/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-r4rf2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-txt9d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015" already present on machine

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from "" to https://kubernetes.default.svc

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-txt9d

Started

Started container migrator

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n"

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-txt9d

Created

Created container: migrator

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-txt9d

Created

Created container: graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-txt9d

Started

Started container graceful-termination

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-8586dccc9b-49fsv_baba0acb-0f31-441d-be3f-bac450640357 became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.33"}]
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.33"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found",Progressing changed from Unknown to False ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '<nil>' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+ 	"apiServerArguments": map[string]any{
+ 		"feature-gates": []any{
+ 			string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+ 			string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+ 			string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+ 			string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+ 		},
+ 	},
+ 	"projectConfig": map[string]any{"projectRequestMessage": string("")},
+ 	"routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")},
+ 	"servingInfo": map[string]any{
+ 		"cipherSuites": []any{
+ 			string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+ 			string("TLS_CHACHA20_POLY1305_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+ 		},
+ 		"minTLSVersion": string("VersionTLS12"),
+ 	},
+ 	"storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}},
  }

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-zz9fm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from "" to https://api.sno.openstack.lab:6443
(x13)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from 0 to 86400

openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | RoutingConfigSubdomainChanged | Domain changed from "" to "apps.sno.openstack.lab"
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTemplates | templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: "
openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-7bfhf" has been approved
openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-7bfhf" is created for OpenShiftAuthenticatorCertRequester
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing
openshift-network-operator | kubelet | iptables-alerter-r2vvc | Created | Created container: iptables-alerter (x2)
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-8tv99 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e"
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found
openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | NamespaceCreated | Created Namespace/openshift-apiserver because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing
openshift-network-operator | kubelet | iptables-alerter-r2vvc | Started | Started container iptables-alerter
openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found
openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found
openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-apiserver because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found"
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server"
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" in 6.379s (6.379s including waiting). Image size: 677827184 bytes.
openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-apiserver because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found"
openshift-ingress-operator | kubelet | ingress-operator-6569778c84-rr8r7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" in 6.289s (6.289s including waiting). Image size: 511125422 bytes.
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-zz9fm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" in 3.459s (3.459s including waiting). Image size: 507867630 bytes.
openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-8tv99 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" in 2.462s (2.462s including waiting). Image size: 506374680 bytes.
openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-r4rf2 | Started | Started container cluster-version-operator
openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-r4rf2 | Created | Created container: cluster-version-operator
openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-r4rf2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" in 6.55s (6.55s including waiting). Image size: 517888569 bytes.
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace
openshift-dns-operator | kubelet | dns-operator-8c7d49845-4dhth | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3" in 6.32s (6.321s including waiting). Image size: 468159025 bytes.
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_011cd6fe-f24d-4319-a0c6-22ed5d2b2aa1 became leader
openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-2w6mj
openshift-cluster-node-tuning-operator | kubelet | tuned-2w6mj | Started | Started container tuned
openshift-cluster-node-tuning-operator | kubelet | tuned-2w6mj | Created | Created container: tuned
openshift-cluster-node-tuning-operator | kubelet | tuned-2w6mj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine
openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bcf775fc9-h99t4_7d49b5d9-4533-4487-aa1f-c774dc2f60a6 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-bcf775fc9-h99t4_7d49b5d9-4533-4487-aa1f-c774dc2f60a6 became leader
openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-cdk2w
openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing
openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-ingress-operator | kubelet | ingress-operator-6569778c84-rr8r7 | Started | Started container kube-rbac-proxy
openshift-ingress-operator | kubelet | ingress-operator-6569778c84-rr8r7 | Created | Created container: kube-rbac-proxy
openshift-ingress-operator | kubelet | ingress-operator-6569778c84-rr8r7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"
openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{ +  "build": map[string]any{ +  "buildDefaults": map[string]any{"resources": map[string]any{}}, +  "imageTemplateFormat": map[string]any{ +  "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7e373bb5"...), +  }, +  }, +  "controllers": []any{ +  string("openshift.io/build"), string("openshift.io/build-config-change"), +  string("openshift.io/builder-rolebindings"), +  string("openshift.io/builder-serviceaccount"), +  string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), +  string("openshift.io/deployer-rolebindings"), +  string("openshift.io/deployer-serviceaccount"), +  string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), +  string("openshift.io/image-puller-rolebindings"), +  string("openshift.io/image-signature-import"), +  string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), +  string("openshift.io/ingress-to-route"), +  string("openshift.io/origin-namespace"), ..., +  }, +  "deployer": map[string]any{ +  "imageTemplateFormat": map[string]any{ +  "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f7696d1b6"...), +  }, +  }, +  "featureGates": []any{string("BuildCSIVolumes=true")}, +  "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   }
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "
openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing
openshift-dns-operator | kubelet | dns-operator-8c7d49845-4dhth | Created | Created container: dns-operator
openshift-dns-operator | kubelet | dns-operator-8c7d49845-4dhth | Started | Started container dns-operator
openshift-dns-operator | kubelet | dns-operator-8c7d49845-4dhth | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-dns-operator | kubelet | dns-operator-8c7d49845-4dhth | Created | Created container: kube-rbac-proxy
openshift-dns-operator | kubelet | dns-operator-8c7d49845-4dhth | Started | Started container kube-rbac-proxy
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-584cc7bcb5-zz9fm_7f3bcebc-c923-4013-9dd9-fc307e71d3c5 became leader

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-bcf775fc9-h99t4_7d49b5d9-4533-4487-aa1f-c774dc2f60a6

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-bcf775fc9-h99t4_7d49b5d9-4533-4487-aa1f-c774dc2f60a6 became leader

openshift-cluster-node-tuning-operator

kubelet

tuned-2w6mj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine

openshift-cluster-node-tuning-operator

kubelet

tuned-2w6mj

Created

Created container: tuned

openshift-cluster-node-tuning-operator

kubelet

tuned-2w6mj

Started

Started container tuned

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-77cd4d9559-8l7xv_385e9f59-f8d2-4295-9998-6621c6a8dd53 became leader

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-2w6mj

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing (x6)

openshift-controller-manager

replicaset-controller

controller-manager-666d7db58c

FailedCreate

Error creating: pods "controller-manager-666d7db58c-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found

openshift-ingress

replicaset-controller

router-default-7b65dc9fcb

SuccessfulCreate

Created pod: router-default-7b65dc9fcb-zxkt2

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"",Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"",Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-7b65dc9fcb to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager namespace

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-6847bb4785

SuccessfulCreate

Created pod: csi-snapshot-controller-6847bb4785-vqn96

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-ng8tz

openshift-dns

kubelet

node-resolver-ng8tz

Started

Started container dns-node-resolver

openshift-dns

kubelet

node-resolver-ng8tz

Created

Created container: dns-node-resolver

openshift-dns

kubelet

node-resolver-ng8tz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" already present on machine

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-9cc7d7bb to 1

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-9cc7d7bb

SuccessfulCreate

Created pod: operator-controller-controller-manager-9cc7d7bb-t75jj

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-84b8d9d697 to 1

openshift-catalogd

replicaset-controller

catalogd-controller-manager-84b8d9d697

SuccessfulCreate

Created pod: catalogd-controller-manager-84b8d9d697-zvzxs

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-6fb4df594f-8tv99_14a33de2-da15-47db-af87-a75ea3f15185 became leader

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-dns

kubelet

dns-default-cdk2w

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-666d7db58c to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-route-controller-manager namespace

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found"

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate
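
Each record above is the same five-field shape given in the table header (Namespace, Component, RelatedObject, Reason, Message), one field per line with blank-line separators. A minimal parser sketch for this layout; the field names are an assumption taken from that header, and repeat markers such as "(x2)" are not handled:

```python
from typing import Dict, List

# Field order assumed from the events-table header (Time column is empty here).
FIELDS = ["namespace", "component", "related_object", "reason", "message"]

def parse_events(text: str) -> List[Dict[str, str]]:
    """Group non-empty lines into five-field event records."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    usable = len(lines) - len(lines) % len(FIELDS)  # drop a trailing partial record
    return [dict(zip(FIELDS, lines[i:i + len(FIELDS)])) for i in range(0, usable, len(FIELDS))]

sample = """\
openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate
"""
events = parse_events(sample)
```

Once parsed, the records can be filtered by `reason` or `namespace` to trace a single controller through the log.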

openshift-catalogd

replicaset-controller

catalogd-controller-manager-84b8d9d697

SuccessfulCreate

Created pod: catalogd-controller-manager-84b8d9d697-zvzxs

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-84b8d9d697 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

FailedMount

MountVolume.SetUp failed for volume "ca-certs" : configmap references non-existent config key: ca-bundle.crt

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-6847bb4785 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-666d7db58c

SuccessfulCreate

Created pod: controller-manager-666d7db58c-6d9wp

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/client-ca -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/client-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found
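
The run of *CreateFailed events above is the usual bootstrap ordering race: the static-resources controller retries resource creation until the NamespaceCreated event lands, after which the same resources are created successfully. A hedged sketch that tallies failed versus successful creates by reason; the event tuples are an illustrative subset condensed from the log:

```python
from collections import Counter

# (reason, message) pairs condensed from the events above (illustrative subset).
events = [
    ("ConfigMapCreateFailed", 'namespaces "openshift-controller-manager" not found'),
    ("RoleBindingCreateFailed", 'namespaces "openshift-route-controller-manager" not found'),
    ("NamespaceCreated", "Created Namespace/openshift-controller-manager because it was missing"),
    ("ConfigMapCreated", "Created ConfigMap/client-ca -n openshift-controller-manager because it was missing"),
]

# Tally by reason; a nonzero *CreateFailed count before NamespaceCreated
# indicates the ordering race rather than a persistent failure.
counts = Counter(reason for reason, _ in events)
failed = sum(n for reason, n in counts.items() if reason.endswith("CreateFailed"))
succeeded = sum(n for reason, n in counts.items() if reason.endswith("Created"))
```

Failed creates that persist after the namespace exists would point at a real problem; here they stop once the namespace is created.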

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Started

Started container manager

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

kubelet

controller-manager-666d7db58c-6d9wp

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-cluster-storage-operator

multus

csi-snapshot-controller-6847bb4785-vqn96

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-vqn96

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9"

openshift-route-controller-manager

replicaset-controller

route-controller-manager-7bcb58f8c7

SuccessfulCreate

Created pod: route-controller-manager-7bcb58f8c7-49bnf

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-7bcb58f8c7 to 1

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-operator-controller

multus

operator-controller-controller-manager-9cc7d7bb-t75jj

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes

openshift-catalogd

multus

catalogd-controller-manager-84b8d9d697-zvzxs

AddedInterface

Add eth0 [10.128.0.28/23] from ovn-kubernetes

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Started

Started container kube-rbac-proxy
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Created

Created container: kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Started

Started container kube-rbac-proxy

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Created

Created container: kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-dns

kubelet

dns-default-cdk2w

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd"

openshift-dns

multus

dns-default-cdk2w

AddedInterface

Add eth0 [10.128.0.25/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-7bcb58f8c7-49bnf

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"
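
OperatorStatusChanged messages like the one above embed a before/after pair of newline-separated conditions; the condition that actually changed can be recovered with a line-wise set difference. A sketch, with the condition strings heavily abbreviated from the event above:

```python
# Abbreviated "from" and "to" condition strings from the status-change event.
before = (
    "NodeControllerDegraded: All master nodes are ready\n"
    'NodeKubeconfigControllerDegraded: configmap "kube-apiserver-server-ca" not found\n'
    'RevisionControllerDegraded: configmap "kube-apiserver-pod" not found'
)
after = (
    "NodeControllerDegraded: All master nodes are ready\n"
    'RevisionControllerDegraded: configmap "kube-apiserver-pod" not found'
)

# Conditions present before but absent after: these were resolved by the change.
resolved = sorted(set(before.splitlines()) - set(after.splitlines()))
```

In this event the NodeKubeconfigControllerDegraded condition is the one that cleared, which a plain visual diff of the two long strings makes hard to spot.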

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Created

Created container: kube-rbac-proxy

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Started

Started container kube-rbac-proxy

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-catalogd

catalogd-controller-manager-84b8d9d697-zvzxs_4f38e715-9c52-4311-9e9e-0f641fda31f4

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-84b8d9d697-zvzxs_4f38e715-9c52-4311-9e9e-0f641fda31f4 became leader

openshift-catalogd

catalogd-controller-manager-84b8d9d697-zvzxs_4f38e715-9c52-4311-9e9e-0f641fda31f4

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-84b8d9d697-zvzxs_4f38e715-9c52-4311-9e9e-0f641fda31f4 became leader
(x3)

openshift-controller-manager

kubelet

controller-manager-666d7db58c-6d9wp

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-7bcb58f8c7-49bnf

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-operator-controller

operator-controller-controller-manager-9cc7d7bb-t75jj_9a4dd530-2be6-49e0-83e9-421a300599b4

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-9cc7d7bb-t75jj_9a4dd530-2be6-49e0-83e9-421a300599b4 became leader

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64"
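
The cluster-version events (RetrievePayload, LoadPayload, PayloadLoaded) carry their attributes inline as key="value" pairs; a small regex sketch pulls them out. The image digest below is abbreviated for brevity:

```python
import re

# Message text in the key="value" style of the cluster-version events
# (image digest abbreviated).
msg = ('Payload loaded version="4.18.33" '
       'image="quay.io/openshift-release-dev/ocp-release@sha256:40bb..." '
       'architecture="amd64"')

# Extract every key="value" attribute from the event message.
attrs = dict(re.findall(r'(\w+)="([^"]*)"', msg))
```

This turns the free-text message into a dict keyed by `version`, `image`, and `architecture`, which is easier to compare across the RetrievePayload/LoadPayload/PayloadLoaded sequence.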
(x3)

openshift-controller-manager

kubelet

controller-manager-666d7db58c-6d9wp

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: caused by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-669d5ddb7c

SuccessfulCreate

Created pod: controller-manager-669d5ddb7c-jzjkh

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-7bcb58f8c7 to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-65c596ccd9

SuccessfulCreate

Created pod: route-controller-manager-65c596ccd9-k8nq7

openshift-route-controller-manager

replicaset-controller

route-controller-manager-7bcb58f8c7

SuccessfulDelete

Deleted pod: route-controller-manager-7bcb58f8c7-49bnf

openshift-controller-manager

replicaset-controller

controller-manager-666d7db58c

SuccessfulDelete

Deleted pod: controller-manager-666d7db58c-6d9wp

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-65c596ccd9 to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-669d5ddb7c to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-666d7db58c to 0 from 1

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-vqn96

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" in 2.857s (2.857s including waiting). Image size: 463600445 bytes.

openshift-dns

kubelet

dns-default-cdk2w

Started

Started container kube-rbac-proxy

openshift-dns

kubelet

dns-default-cdk2w

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-cdk2w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-dns

kubelet

dns-default-cdk2w

Started

Started container dns

openshift-dns

kubelet

dns-default-cdk2w

Created

Created container: dns

openshift-dns

kubelet

dns-default-cdk2w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd" in 2.937s (2.937s including waiting). Image size: 484074784 bytes.

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-controller-manager

kubelet

controller-manager-666d7db58c-6d9wp

FailedMount

MountVolume.SetUp failed for volume "client-ca" : object "openshift-controller-manager"/"client-ca" not registered

openshift-controller-manager

kubelet

controller-manager-666d7db58c-6d9wp

FailedMount

MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager"/"config" not registered

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.31/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.33"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.33"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.33"} {"csi-snapshot-controller" "4.18.33"}]

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-786f58c449 to 1

openshift-apiserver

replicaset-controller

apiserver-786f58c449

SuccessfulCreate

Created pod: apiserver-786f58c449-64k2s

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6847bb4785-vqn96

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6847bb4785-vqn96 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5b94645546 to 1 from 0

openshift-controller-manager

kubelet

controller-manager-669d5ddb7c-jzjkh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

multus

controller-manager-669d5ddb7c-jzjkh

AddedInterface

Add eth0 [10.128.0.33/23] from ovn-kubernetes

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-65c596ccd9 to 0 from 1

openshift-controller-manager

replicaset-controller

controller-manager-669d5ddb7c

SuccessfulDelete

Deleted pod: controller-manager-669d5ddb7c-jzjkh

openshift-controller-manager

replicaset-controller

controller-manager-5b94645546

SuccessfulCreate

Created pod: controller-manager-5b94645546-lgnpc

openshift-route-controller-manager

multus

route-controller-manager-56fdc6b8c6-52tgv

AddedInterface

Add eth0 [10.128.0.34/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-56fdc6b8c6-52tgv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655"

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-56fdc6b8c6 to 1 from 0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-669d5ddb7c to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-65c596ccd9

SuccessfulDelete

Deleted pod: route-controller-manager-65c596ccd9-k8nq7

openshift-route-controller-manager

replicaset-controller

route-controller-manager-56fdc6b8c6

SuccessfulCreate

Created pod: route-controller-manager-56fdc6b8c6-52tgv

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531835

SuccessfulCreate

Created pod: collect-profiles-29531835-tsgrz

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29531835

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-multus

multus

network-metrics-daemon-2vsjh

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-multus

multus

multus-admission-controller-5f98f4f8d5-b985k

AddedInterface

Add eth0 [10.128.0.12/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

openshift-operator-lifecycle-manager

multus

package-server-manager-5c75f78c8b-9d82f

AddedInterface

Add eth0 [10.128.0.10/23] from ovn-kubernetes

openshift-monitoring

multus

cluster-monitoring-operator-6bb6d78bf-mzb7q

AddedInterface

Add eth0 [10.128.0.9/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing

openshift-marketplace

multus

marketplace-operator-6f5488b997-dbsnm

AddedInterface

Add eth0 [10.128.0.22/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-786f58c449 to 0 from 1
(x16)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

replicaset-controller

apiserver-786f58c449

SuccessfulDelete

Deleted pod: apiserver-786f58c449-64k2s

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-fdc9d7cdd to 1 from 0

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-9d82f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
(x5)

openshift-apiserver

kubelet

apiserver-786f58c449-64k2s

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found
(x32)
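Several events in this dump carry a trailing repeat marker such as "(x32)" above, meaning the event fired that many times. When post-processing a dump like this, the marker can be folded back into a numeric count; a minimal Python sketch (the `repeat_count` helper and its treatment of the "(xN)" convention are illustrative assumptions about this export, not part of any Kubernetes API):

```python
import re

# Matches the "(xN)" repeat markers that follow some events in this dump.
COUNT_RE = re.compile(r"\(x(\d+)\)")

def repeat_count(line):
    """Return N for a "(xN)" marker line, or 1 for any other line."""
    m = COUNT_RE.fullmatch(line.strip())
    return int(m.group(1)) if m else 1
```

Under that convention, the FailedMount event above would contribute 32 occurrences rather than 1.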

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf"

openshift-controller-manager

kubelet

controller-manager-669d5ddb7c-jzjkh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" in 7.324s (7.324s including waiting). Image size: 558105176 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-56fdc6b8c6-52tgv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" in 7.017s (7.017s including waiting). Image size: 486990304 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656"

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf"

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c"

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-apiserver

replicaset-controller

apiserver-fdc9d7cdd

SuccessfulCreate

Created pod: apiserver-fdc9d7cdd-8v72m

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"
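OperatorStatusChanged messages like the one above embed the old and new condition blocks as quoted strings separated by literal backslash-n sequences. Diffing the two blocks shows which conditions actually cleared; a minimal sketch, assuming the single-change form `... changed from "..." to "..."` exactly as printed in this dump (the helper name is illustrative):

```python
def cleared_conditions(msg):
    """Return condition lines present in the 'from' block but absent from
    the 'to' block of a `... changed from "A" to "B"` message.

    Assumes a single from/to pair and literal two-character backslash-n
    separators, as the messages appear verbatim in this event dump.
    """
    marker = ' changed from "'
    i = msg.find(marker)
    if i < 0:
        return []
    body = msg[i + len(marker):]
    before_s, sep, after_s = body.partition('" to "')
    if not sep or not after_s.endswith('"'):
        return []
    before = before_s.split("\\n")           # split on literal \n
    after = set(after_s[:-1].split("\\n"))   # drop the closing quote
    return [cond for cond in before if cond not in after]
```

Applied to the authentication event at the start of this section, it would isolate the RouterCertsDegraded line as the condition that cleared.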

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c"

openshift-controller-manager

kubelet

controller-manager-669d5ddb7c-jzjkh

Started

Started container controller-manager

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-56fdc6b8c6-52tgv_d67ed1e9-b77b-4e70-9517-edfeb4ddec22 became leader

openshift-route-controller-manager

kubelet

route-controller-manager-56fdc6b8c6-52tgv

Started

Started container route-controller-manager

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

kubelet

controller-manager-669d5ddb7c-jzjkh

Created

Created container: controller-manager

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"

openshift-controller-manager

kubelet

controller-manager-669d5ddb7c-jzjkh

Killing

Stopping container controller-manager

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-669d5ddb7c-jzjkh became leader

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-56fdc6b8c6-52tgv

Created

Created container: route-controller-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-9d82f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-9d82f

Started

Started container kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-9d82f

Created

Created container: kube-rbac-proxy

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-r4rf2

Killing

Stopping container cluster-version-operator

openshift-cluster-version

replicaset-controller

cluster-version-operator-5cfd9759cf

SuccessfulDelete

Deleted pod: cluster-version-operator-5cfd9759cf-r4rf2

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-5cfd9759cf to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_011cd6fe-f24d-4319-a0c6-22ed5d2b2aa1 stopped leading

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "optional secret/webhook-authenticator has been created"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-mks7l" has been approved

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Started

Started container network-metrics-daemon

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

Created

Created container: marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" in 3.426s (3.426s including waiting). Image size: 458025547 bytes.

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c" in 3.439s (3.439s including waiting). Image size: 484349508 bytes.

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Created

Created container: multus-admission-controller

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Created

Created container: cluster-monitoring-operator

openshift-controller-manager

kubelet

controller-manager-5b94645546-lgnpc

Unhealthy

Readiness probe failed: Get "https://10.128.0.37:8443/healthz": dial tcp 10.128.0.37:8443: connect: connection refused

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Started

Started container cluster-monitoring-operator

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager

kubelet

controller-manager-5b94645546-lgnpc

ProbeError

Readiness probe error: Get "https://10.128.0.37:8443/healthz": dial tcp 10.128.0.37:8443: connect: connection refused body:

openshift-controller-manager

kubelet

controller-manager-5b94645546-lgnpc

Started

Started container controller-manager

openshift-controller-manager

kubelet

controller-manager-5b94645546-lgnpc

Created

Created container: controller-manager

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-f6jkl" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-mks7l" is created for OpenShiftMonitoringClientCertRequester

openshift-controller-manager

kubelet

controller-manager-5b94645546-lgnpc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Started

Started container multus-admission-controller

openshift-controller-manager

multus

controller-manager-5b94645546-lgnpc

AddedInterface

Add eth0 [10.128.0.37/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de" in 3.415s (3.415s including waiting). Image size: 448723134 bytes.

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" in 3.412s (3.412s including waiting). Image size: 456470711 bytes.

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-f6jkl" has been approved

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Created

Created container: network-metrics-daemon

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5b94645546-lgnpc became leader

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c" in 3.439s (3.439s including waiting). Image size: 484349508 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Created

Created container: cluster-monitoring-operator

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-mzb7q

Started

Started container cluster-monitoring-operator

openshift-cluster-version

replicaset-controller

cluster-version-operator-57476485

SuccessfulCreate

Created pod: cluster-version-operator-57476485-7g2gq

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
(x54)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-f6jkl" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Started

Started container network-metrics-daemon

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-mks7l" is created for OpenShiftMonitoringClientCertRequester

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-57476485 to 1

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Created

Created container: kube-rbac-proxy

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3"

openshift-apiserver

multus

apiserver-fdc9d7cdd-8v72m

AddedInterface

Add eth0 [10.128.0.35/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Created

Created container: kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" in 3.412s (3.412s including waiting). Image size: 456470711 bytes.

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Started

Started container multus-admission-controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

Started

Started container marketplace-operator

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de" in 3.415s (3.415s including waiting). Image size: 448723134 bytes.

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Created

Created container: network-metrics-daemon

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-75d56db95f to 1

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-75d56db95f to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-75d56db95f

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-75d56db95f-hw4m2

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Started

Started container kube-rbac-proxy

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_1293ba9c-1ae7-48e0-aa2c-70b2f10421eb became leader

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-2vsjh

Started

Started container kube-rbac-proxy

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-75d56db95f

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-75d56db95f-hw4m2

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Started

Started container kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-b985k

Started

Started container kube-rbac-proxy

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-oauth-apiserver

replicaset-controller

apiserver-6f8b7f45f7

SuccessfulCreate

Created pod: apiserver-6f8b7f45f7-5df4m

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-6f8b7f45f7 to 1

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-controller-manager: cause by changes in data.ca-bundle.crt

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5b94645546 to 0 from 1

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-route-controller-manager: cause by changes in data.ca-bundle.crt

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.33"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-56fdc6b8c6 to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-56fdc6b8c6

SuccessfulDelete

Deleted pod: route-controller-manager-56fdc6b8c6-52tgv

openshift-route-controller-manager

kubelet

route-controller-manager-56fdc6b8c6-52tgv

Killing

Stopping container route-controller-manager

openshift-controller-manager

kubelet

controller-manager-5b94645546-lgnpc

Killing

Stopping container controller-manager
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},    "apiServerArguments": map[string]any{    "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "authentication-token-webhook-config-file": []any{ +  string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), +  }, +  "authentication-token-webhook-version": []any{string("v1")},    "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},    "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},    ... // 6 identical entries    },    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},    "gracefulTerminationDuration": string("15"),    ... // 2 identical entries   }
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-controller-manager

replicaset-controller

controller-manager-557cb6655b

SuccessfulCreate

Created pod: controller-manager-557cb6655b-75nhl

openshift-controller-manager

replicaset-controller

controller-manager-5b94645546

SuccessfulDelete

Deleted pod: controller-manager-5b94645546-lgnpc

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-85ff64b64d to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-557cb6655b to 1 from 0

openshift-route-controller-manager

replicaset-controller

route-controller-manager-85ff64b64d

SuccessfulCreate

Created pod: route-controller-manager-85ff64b64d-965rz

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3" in 7.194s (7.194s including waiting). Image size: 589275174 bytes.

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-9d82f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" in 10.866s (10.866s including waiting). Image size: 862501144 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-85ff64b64d-965rz

Started

Started container route-controller-manager

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-85ff64b64d-965rz

Created

Created container: route-controller-manager

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-85ff64b64d-965rz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-85ff64b64d-965rz

AddedInterface

Add eth0 [10.128.0.41/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

package-server-manager-5c75f78c8b-9d82f_8891dc3b-a5f0-4335-9a00-ffdb5d70df9f

packageserver-controller-lock

LeaderElection

package-server-manager-5c75f78c8b-9d82f_8891dc3b-a5f0-4335-9a00-ffdb5d70df9f became leader

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-6f8b7f45f7-5df4m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1"

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.40/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-oauth-apiserver

multus

apiserver-6f8b7f45f7-5df4m

AddedInterface

Add eth0 [10.128.0.38/23] from ovn-kubernetes

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-85ff64b64d-965rz_048d15c5-dcb7-4562-bfa2-fca54cf9a69c became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Started

Started container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-fdc9d7cdd-8v72m

Created

Created container: openshift-apiserver

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-oauth-apiserver

kubelet

apiserver-6f8b7f45f7-5df4m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1" in 3.405s (3.405s including waiting). Image size: 505244089 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-557cb6655b-75nhl became leader

openshift-controller-manager

multus

controller-manager-557cb6655b-75nhl

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-6f8b7f45f7-5df4m

Started

Started container fix-audit-permissions

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: caused by changes in data.config.yaml

openshift-oauth-apiserver

kubelet

apiserver-6f8b7f45f7-5df4m

Created

Created container: fix-audit-permissions

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-oauth-apiserver

kubelet

apiserver-6f8b7f45f7-5df4m

Started

Started container oauth-apiserver

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-686847ff5f

SuccessfulCreate

Created pod: control-plane-machine-set-operator-686847ff5f-zzvtt

openshift-oauth-apiserver

kubelet

apiserver-6f8b7f45f7-5df4m

Created

Created container: oauth-apiserver

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-686847ff5f to 1

openshift-oauth-apiserver

kubelet

apiserver-6f8b7f45f7-5df4m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.quota.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.project.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.33"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.image.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.authorization.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.apps.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.route.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.build.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.33"}] to [{"operator" "4.18.33"} {"openshift-apiserver" "4.18.33"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.template.openshift.io because it was missing

openshift-machine-api

multus

control-plane-machine-set-operator-686847ff5f-zzvtt

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.security.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac"

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-798b897698 to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/template.openshift.io/v1: 401"

openshift-cluster-machine-approver

replicaset-controller

machine-approver-798b897698

SuccessfulCreate

Created pod: machine-approver-798b897698-6hgvq

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-6hgvq

Started

Started container kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"
(x23)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-machine-api

control-plane-machine-set-operator-686847ff5f-zzvtt_57032a26-124b-4df9-b690-810a9e8039ce

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-686847ff5f-zzvtt_57032a26-124b-4df9-b690-810a9e8039ce became leader

openshift-kube-controller-manager

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" in 2.362s (2.362s including waiting). Image size: 470575802 bytes.

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-6hgvq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-6hgvq

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-6hgvq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa"

openshift-etcd

kubelet

etcd-master-0-master-0

Killing

Stopping container etcdctl

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted
(x2)

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-tfmbs

ProbeError

Liveness probe error: Get "https://10.128.0.8:8443/healthz": dial tcp 10.128.0.8:8443: connect: connection refused body:
(x2)

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-tfmbs

Unhealthy

Liveness probe failed: Get "https://10.128.0.8:8443/healthz": dial tcp 10.128.0.8:8443: connect: connection refused
(x4)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

Unhealthy

Readiness probe failed: Get "http://10.128.0.22:8080/healthz": dial tcp 10.128.0.22:8080: connect: connection refused
(x4)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

ProbeError

Readiness probe error: Get "http://10.128.0.22:8080/healthz": dial tcp 10.128.0.22:8080: connect: connection refused body:
(x3)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

Unhealthy

Liveness probe failed: Get "http://10.128.0.22:8080/healthz": dial tcp 10.128.0.22:8080: connect: connection refused
(x3)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

ProbeError

Liveness probe error: Get "http://10.128.0.22:8080/healthz": dial tcp 10.128.0.22:8080: connect: connection refused body:

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x4)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

ProbeError

Readiness probe error: Get "http://10.128.0.28:8081/readyz": dial tcp 10.128.0.28:8081: connect: connection refused body:
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Unhealthy

Liveness probe failed: Get "http://10.128.0.28:8081/healthz": dial tcp 10.128.0.28:8081: connect: connection refused
(x4)


openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

ProbeError

Liveness probe error: Get "http://10.128.0.28:8081/healthz": dial tcp 10.128.0.28:8081: connect: connection refused body:
(x4)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Unhealthy

Readiness probe failed: Get "http://10.128.0.28:8081/readyz": dial tcp 10.128.0.28:8081: connect: connection refused
(x2)



openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

ProbeError

Liveness probe error: Get "http://10.128.0.28:8081/healthz": dial tcp 10.128.0.28:8081: connect: connection refused body:
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

ProbeError

Liveness probe error: Get "http://10.128.0.26:8081/healthz": dial tcp 10.128.0.26:8081: connect: connection refused body:
(x5)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

ProbeError

Readiness probe error: Get "http://10.128.0.26:8081/readyz": dial tcp 10.128.0.26:8081: connect: connection refused body:
(x5)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Unhealthy

Readiness probe failed: Get "http://10.128.0.26:8081/readyz": dial tcp 10.128.0.26:8081: connect: connection refused
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Unhealthy

Liveness probe failed: Get "http://10.128.0.26:8081/healthz": dial tcp 10.128.0.26:8081: connect: connection refused
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b" already present on machine
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

Created

Created container: manager
(x2)

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

ProbeError

Liveness probe error: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused body:
(x2)

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

ProbeError

Readiness probe error: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused body:
(x2)

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

Unhealthy

Liveness probe failed: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused
(x2)

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

Unhealthy

Readiness probe failed: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

Created

Created container: controller-manager
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

Started

Started container controller-manager
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-network-operator

kubelet

network-operator-7d7db75979-4fk6k

BackOff

Back-off restarting failed container network-operator in pod network-operator-7d7db75979-4fk6k_openshift-network-operator(f77227c8-c52d-4a71-ae1b-792055f6f23d)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-7bcfbc574b-8zrj9

BackOff

Back-off restarting failed container kube-controller-manager-operator in pod kube-controller-manager-operator-7bcfbc574b-8zrj9_openshift-kube-controller-manager-operator(22813c83-2f60-44ad-9624-ad367cec08f7)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-r6p58

BackOff

Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-fc889cfd5-r6p58_openshift-kube-storage-version-migrator-operator(c3fed34f-b275-42c6-af6c-8de3e6fe0f9e)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-5d87bf58c-ncrqj

BackOff

Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-5d87bf58c-ncrqj_openshift-kube-apiserver-operator(17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d)

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).",status.versions changed from [{"operator" "4.18.33"}] to [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"}]
(x4)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.33"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerWorkloadDegraded: " to "All is well"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get openshiftapiservers.operator.openshift.io cluster)\nAPIServerWorkloadDegraded: "

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-5d8dfcdc87-b8ght became leader

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-557cb6655b-75nhl became leader

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6847bb4785-vqn96

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6847bb4785-vqn96 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

installer errors: installer: s: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0224 05:15:11.993986 1 cmd.go:413] Getting controller reference for node master-0 I0224 05:15:12.083227 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0224 05:15:12.083331 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0224 05:15:12.083349 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0224 05:15:12.095946 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0224 05:15:42.096030 1 cmd.go:524] Getting installer pods for node master-0 F0224 05:15:56.100613 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-kube-scheduler-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-kube-scheduler-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 05:15:11.993986 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083227 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083331 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.083349 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.095946 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:42.096030 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:15:56.100613 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-kube-scheduler-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 05:15:11.993986 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083227 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083331 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.083349 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.095946 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:42.096030 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:15:56.100613 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-kube-scheduler-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 05:15:11.993986 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083227 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083331 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.083349 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.095946 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:42.096030 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:15:56.100613 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-kube-scheduler-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 05:15:11.993986 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083227 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083331 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.083349 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.095946 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:42.096030 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:15:56.100613 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-kube-scheduler-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler-recovery)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 05:15:11.993986 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083227 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083331 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.083349 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.095946 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:42.096030 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:15:56.100613 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/template.openshift.io/v1: 401" to "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.image.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.project.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.quota.openshift.io)]"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: "

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.image.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.project.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.quota.openshift.io)]" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/template.openshift.io/v1: 401"

openshift-cluster-machine-approver

master-0_276f6a28-7dff-4e71-84fa-95c827166372

cluster-machine-approver-leader

LeaderElection

master-0_276f6a28-7dff-4e71-84fa-95c827166372 became leader

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-65c5c48b9b

SuccessfulCreate

Created pod: cluster-samples-operator-65c5c48b9b-hmlsl

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-779979bdf7

SuccessfulCreate

Created pod: cluster-image-registry-operator-779979bdf7-t98nr

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-86b8dc6d6 to 1

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-86b8dc6d6

SuccessfulCreate

Created pod: cluster-autoscaler-operator-86b8dc6d6-mcf2z

openshift-kube-scheduler

kubelet

installer-1-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-scheduler

multus

installer-1-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-5c7cf458b4 to 1

openshift-machine-api

replicaset-controller

machine-api-operator-5c7cf458b4

SuccessfulCreate

Created pod: machine-api-operator-5c7cf458b4-65mc5

openshift-config-operator

replicaset-controller

openshift-config-operator-6f47d587d6

SuccessfulCreate

Created pod: openshift-config-operator-6f47d587d6-7b87v

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-6f47d587d6 to 1

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-596f79dd6f

SuccessfulCreate

Created pod: catalog-operator-596f79dd6f-v22h2

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-596f79dd6f to 1

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-d6bb9bb76 to 1

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-d6bb9bb76

SuccessfulCreate

Created pod: cluster-baremetal-operator-d6bb9bb76-54hnv

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-7f8c75f984 to 1

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-f94476f49 to 1

openshift-machine-config-operator

replicaset-controller

machine-config-operator-7f8c75f984

SuccessfulCreate

Created pod: machine-config-operator-7f8c75f984-922md

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-f94476f49

SuccessfulCreate

Created pod: cluster-storage-operator-f94476f49-tlmg5

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-cbd75ff8d

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-cbd75ff8d to 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-retry-1-master-0 -n openshift-kube-scheduler because it was missing

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-5499d7f7bb

SuccessfulCreate

Created pod: olm-operator-5499d7f7bb-8xdmq

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-5499d7f7bb to 1

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-798b897698 to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-654dcf5585

SuccessfulCreate

Created pod: route-controller-manager-654dcf5585-fgmnd

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-65c5c48b9b to 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-654dcf5585 to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-85ff64b64d to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-85ff64b64d

SuccessfulDelete

Deleted pod: route-controller-manager-85ff64b64d-965rz

openshift-route-controller-manager

kubelet

route-controller-manager-85ff64b64d-965rz

Killing

Stopping container route-controller-manager

openshift-cluster-machine-approver

replicaset-controller

machine-approver-798b897698

SuccessfulDelete

Deleted pod: machine-approver-798b897698-6hgvq

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-6hgvq

Killing

Stopping container machine-approver-controller

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-6hgvq

Killing

Stopping container kube-rbac-proxy

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-7657d7494 to 1 from 0

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

openshift-image-registry

deployment-controller

cluster-image-registry-operator

ScalingReplicaSet

Scaled up replica set cluster-image-registry-operator-779979bdf7 to 1

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-6968c58f46 to 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-557cb6655b to 0 from 1

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-6968c58f46

SuccessfulCreate

Created pod: cloud-credential-operator-6968c58f46-68rth

openshift-controller-manager

replicaset-controller

controller-manager-557cb6655b

SuccessfulDelete

Deleted pod: controller-manager-557cb6655b-75nhl

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-59b498fcfb to 1

openshift-insights

replicaset-controller

insights-operator-59b498fcfb

SuccessfulCreate

Created pod: insights-operator-59b498fcfb-mprnx

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_e3733791-58d8-4e77-98b4-fc483709579a became leader

openshift-controller-manager

kubelet

controller-manager-557cb6655b-75nhl

Killing

Stopping container controller-manager

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-7dd9c7d7b9 to 1

openshift-controller-manager

replicaset-controller

controller-manager-7657d7494

SuccessfulCreate

Created pod: controller-manager-7657d7494-mmsz6

openshift-kube-scheduler

kubelet

installer-1-retry-1-master-0

Started

Started container installer

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_2788a85b-5d25-4a18-896d-8c6812307b51 became leader

openshift-kube-scheduler

kubelet

installer-1-retry-1-master-0

Created

Created container: installer

openshift-cluster-machine-approver

replicaset-controller

machine-approver-7dd9c7d7b9

SuccessfulCreate

Created pod: machine-approver-7dd9c7d7b9-pb6sw

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e"

openshift-machine-api

multus

cluster-baremetal-operator-d6bb9bb76-54hnv

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-machine-api

multus

machine-api-operator-5c7cf458b4-65mc5

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-api

multus

cluster-autoscaler-operator-86b8dc6d6-mcf2z

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-cluster-storage-operator

multus

cluster-storage-operator-f94476f49-tlmg5

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-route-controller-manager

multus

route-controller-manager-654dcf5585-fgmnd

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-54hnv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6"

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

Started

Started container kube-rbac-proxy

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-tlmg5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75"

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

Started

Started container kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

Created

Created container: kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cloud-credential-operator

multus

cloud-credential-operator-6968c58f46-68rth

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed"

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6"

openshift-operator-lifecycle-manager

multus

olm-operator-5499d7f7bb-8xdmq

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-7b87v

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126"

openshift-config-operator

multus

openshift-config-operator-6f47d587d6-7b87v

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

multus

catalog-operator-596f79dd6f-v22h2

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-cluster-samples-operator

multus

cluster-samples-operator-65c5c48b9b-hmlsl

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-v22h2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

requirements not yet checked

openshift-cluster-machine-approver

master-0_1f407694-7d5a-4cc9-a3a7-788fd0634e16

cluster-machine-approver-leader

LeaderElection

master-0_1f407694-7d5a-4cc9-a3a7-788fd0634e16 became leader

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6"

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

Created

Created container: kube-rbac-proxy

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-654dcf5585-fgmnd_593731ee-84ca-48fe-9b83-2c624b698614 became leader

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-v22h2

Created

Created container: catalog-operator

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

Started

Started container kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-v22h2

Started

Started container catalog-operator

openshift-insights

kubelet

insights-operator-59b498fcfb-mprnx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c"

openshift-insights

multus

insights-operator-59b498fcfb-mprnx

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34"

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-8xdmq

Started

Started container olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-8xdmq

Created

Created container: olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-8xdmq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-7657d7494-mmsz6 became leader

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-922md

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

Started

Started container kube-rbac-proxy

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-t98nr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721"

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-922md

Created

Created container: kube-rbac-proxy

openshift-image-registry

multus

cluster-image-registry-operator-779979bdf7-t98nr

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-922md

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

multus

machine-config-operator-7f8c75f984-922md

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-controller-manager

multus

controller-manager-7657d7494-mmsz6

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.33} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf}]

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-df5f88cd4

SuccessfulCreate

Created pod: packageserver-df5f88cd4-cwzcs

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-df5f88cd4 to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-c56dz

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well")

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled down replica set cluster-cloud-controller-manager-operator-cbd75ff8d to 0 from 1

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-cbd75ff8d

SuccessfulDelete

Deleted pod: cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6" in 14.778s (14.778s including waiting). Image size: 456273550 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-54hnv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" in 15.019s (15.019s including waiting). Image size: 470717179 bytes.

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-7b87v

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126" in 14.747s (14.747s including waiting). Image size: 438548891 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6" in 14.71s (14.71s including waiting). Image size: 455311777 bytes.

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-tlmg5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75" in 14.991s (14.991s including waiting). Image size: 513473308 bytes.

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-t98nr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721" in 14.811s (14.811s including waiting). Image size: 548646306 bytes.

openshift-marketplace

kubelet

certified-operators-gn8m8

Created

Created container: extract-utilities

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-7b87v

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c"

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34" in 14.908s (14.908s including waiting). Image size: 862091954 bytes.

openshift-machine-api

cluster-baremetal-operator-d6bb9bb76-54hnv_d1095d9a-4760-43c5-b9a6-5d390768b8e5

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-d6bb9bb76-54hnv_d1095d9a-4760-43c5-b9a6-5d390768b8e5 became leader

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-54hnv

Started

Started container baremetal-kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-54hnv

Created

Created container: baremetal-kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-54hnv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-api

cluster-autoscaler-operator-86b8dc6d6-mcf2z_5f0d7be2-2b1f-4b47-9be7-1489be20d2c0

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-86b8dc6d6-mcf2z_5f0d7be2-2b1f-4b47-9be7-1489be20d2c0 became leader

openshift-insights

kubelet

insights-operator-59b498fcfb-mprnx

Started

Started container insights-operator

openshift-insights

kubelet

insights-operator-59b498fcfb-mprnx

Created

Created container: insights-operator

openshift-insights

kubelet

insights-operator-59b498fcfb-mprnx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c" in 14.58s (14.58s including waiting). Image size: 504558291 bytes.

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-779979bdf7-t98nr_7de7bd49-4ac2-4b11-a9e9-19472c78c171 became leader

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cloud-controller-manager-operator

master-0_05ead80f-a720-48b3-8de6-d96c5334a774

cluster-cloud-controller-manager-leader

LeaderElection

master-0_05ead80f-a720-48b3-8de6-d96c5334a774 became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" in 15.625s (15.626s including waiting). Image size: 557320737 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Started

Started container cluster-cloud-controller-manager

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

Started

Started container machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

Created

Created container: machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed" in 15.182s (15.182s including waiting). Image size: 880247193 bytes.

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

Created

Created container: cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

Started

Started container cloud-credential-operator

openshift-operator-lifecycle-manager

kubelet

packageserver-df5f88cd4-cwzcs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-operator-lifecycle-manager

multus

packageserver-df5f88cd4-cwzcs

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

Created

Created container: cluster-samples-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

Started

Started container cluster-samples-operator

openshift-marketplace

multus

certified-operators-gn8m8

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-gn8m8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

certified-operators-gn8m8

Started

Started container extract-utilities

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6" already present on machine

openshift-marketplace

multus

community-operators-68vwc

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-68vwc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

community-operators-68vwc

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-68vwc

Started

Started container extract-utilities

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

Created

Created container: cluster-samples-operator-watch

openshift-marketplace

multus

redhat-marketplace-v64s6

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Started

Started container extract-utilities

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

Started

Started container cluster-samples-operator-watch

openshift-marketplace

multus

redhat-operators-xm8sw

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-xm8sw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-54hnv

Started

Started container baremetal-kube-rbac-proxy

openshift-machine-api

cluster-baremetal-operator-d6bb9bb76-54hnv_d1095d9a-4760-43c5-b9a6-5d390768b8e5

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-d6bb9bb76-54hnv_d1095d9a-4760-43c5-b9a6-5d390768b8e5 became leader

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-7b87v

Created

Created container: openshift-api

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-7b87v

Started

Started container openshift-api

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Killing

Stopping container kube-rbac-proxy

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

default

machineapioperator

machine-api

Status upgrade

Progressing towards operator: 4.18.33

openshift-insights

openshift-insights-operator

insights-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cloud-controller-manager-operator

master-0_78bf167e-39c8-4da1-8a32-7b09f3890009

cluster-cloud-config-sync-leader

LeaderElection

master-0_78bf167e-39c8-4da1-8a32-7b09f3890009 became leader

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Started

Started container config-sync-controllers

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-marketplace

kubelet

redhat-operators-xm8sw

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorVersionChanged

clusteroperator/storage version "operator" changed from "" to "4.18.33"

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-marketplace

kubelet

redhat-operators-xm8sw

Started

Started container extract-utilities

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-f94476f49-tlmg5_44adcf55-a81e-4ead-acc5-bc5e94a58e6c became leader

openshift-marketplace

kubelet

redhat-operators-xm8sw

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

community-operators-68vwc

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

certified-operators-gn8m8

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Started

Started container kube-rbac-proxy

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Killing

Stopping container cluster-cloud-controller-manager

openshift-operator-lifecycle-manager

kubelet

packageserver-df5f88cd4-cwzcs

Created

Created container: packageserver

openshift-operator-lifecycle-manager

kubelet

packageserver-df5f88cd4-cwzcs

Started

Started container packageserver

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq

Killing

Stopping container config-sync-controllers

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-67dd8d7969 to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-67dd8d7969

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-7b87v

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" in 3.497s (3.497s including waiting). Image size: 495888162 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing

openshift-machine-config-operator

deployment-controller

machine-config-controller

ScalingReplicaSet

Scaled up replica set machine-config-controller-54cb48566c to 1
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.33"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{OperatorAvailable True 2026-02-24 05:19:24 +0000 UTC AsExpected } {OperatorProgressing False 2026-02-24 05:19:24 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-02-24 05:19:24 +0000 UTC AsExpected }]

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.33"} {"operator" "4.18.33"}]

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")

openshift-machine-config-operator

replicaset-controller

machine-config-controller-54cb48566c

SuccessfulCreate

Created pod: machine-config-controller-54cb48566c-9ww5z
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.33"

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-6f47d587d6-7b87v_894a1b2e-1395-46a7-a274-27c9a81f6078 became leader

openshift-machine-config-operator

multus

machine-config-controller-54cb48566c-9ww5z

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Created

Created container: kube-rbac-proxy

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-9ww5z

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-9ww5z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143"

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-9ww5z

Started

Started container kube-rbac-proxy

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

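The records above follow the five-field layout from the table header (Namespace, Component, RelatedObject, Reason, Message), with each field on its own line and blank lines between fields. A minimal sketch of turning this flattened dump back into structured records, assuming every record is exactly five non-empty lines in that order (field names here are illustrative, not from any OpenShift API):

```python
def parse_events(text: str) -> list[dict]:
    """Parse a flattened five-field event dump into a list of dicts.

    Assumes records are sequences of five non-empty lines
    (namespace, component, related object, reason, message)
    separated by blank lines; trailing partial records are dropped.
    """
    fields = ["namespace", "component", "related_object", "reason", "message"]
    # Blank lines are only separators in this layout, so drop them.
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    records = []
    for i in range(0, len(lines) - len(fields) + 1, len(fields)):
        records.append(dict(zip(fields, lines[i:i + len(fields)])))
    return records


sample = """openshift-monitoring

kubelet

node-exporter-qk7rz

Started

Started container node-exporter"""

events = parse_events(sample)
```

This is only a convenience for grepping the dump offline; against a live cluster the same data is available in structured form from the Events API.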
openshift-network-diagnostics

kubelet

network-check-source-58fb6744f5-kn2z7

Started

Started container check-endpoints

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531835-tsgrz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-network-diagnostics

kubelet

network-check-source-58fb6744f5-kn2z7

Created

Created container: check-endpoints

openshift-network-diagnostics

kubelet

network-check-source-58fb6744f5-kn2z7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine

openshift-operator-lifecycle-manager

multus

collect-profiles-29531835-tsgrz

AddedInterface

Add eth0 [10.128.0.66/23] from ovn-kubernetes

openshift-network-diagnostics

multus

network-check-source-58fb6744f5-kn2z7

AddedInterface

Add eth0 [10.128.0.67/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-hw4m2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350"

openshift-monitoring

multus

prometheus-operator-admission-webhook-75d56db95f-hw4m2

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531835-tsgrz

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531835-tsgrz

Created

Created container: collect-profiles

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-xxl55

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-df0cb9d27f56b272338d64d5b97d8502 successfully generated (release version: 4.18.33, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-91b2d9ccaeecfd4381c3009bde309b20 successfully generated (release version: 4.18.33, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-df0cb9d27f56b272338d64d5b97d8502

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-df0cb9d27f56b272338d64d5b97d8502

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Done

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

static-pod-installer

installer-1-retry-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.33"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"operator" "4.18.33"} {"kube-scheduler" "1.31.14"}]

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531835

Completed

Job completed

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29531835, condition: Complete

openshift-marketplace

kubelet

community-operators-68vwc

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 29.169s (29.169s including waiting). Image size: 1210563790 bytes.

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-hw4m2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350" in 23.684s (23.684s including waiting). Image size: 444471741 bytes.

openshift-machine-config-operator

kubelet

machine-config-server-xxl55

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143" in 24.14s (24.14s including waiting). Image size: 487054953 bytes.

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 29.272s (29.272s including waiting). Image size: 1202767548 bytes.

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Started

Started container extract-content

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-hw4m2

Started

Started container prometheus-operator-admission-webhook

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-hw4m2

Created

Created container: prometheus-operator-admission-webhook

openshift-marketplace

kubelet

community-operators-68vwc

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-68vwc

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-xm8sw

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Created

Created container: extract-content

openshift-machine-config-operator

kubelet

machine-config-server-xxl55

Started

Started container machine-config-server

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

Created

Created container: router

openshift-machine-config-operator

kubelet

machine-config-server-xxl55

Created

Created container: machine-config-server

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-marketplace

kubelet

certified-operators-gn8m8

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-gn8m8

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-gn8m8

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 29.347s (29.347s including waiting). Image size: 1238591178 bytes.

openshift-marketplace

kubelet

redhat-operators-xm8sw

Created

Created container: extract-content

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

Started

Started container router

openshift-marketplace

kubelet

redhat-operators-xm8sw

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 29.33s (29.33s including waiting). Image size: 1703852494 bytes.

openshift-marketplace

kubelet

redhat-operators-xm8sw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_887e9a01-76a9-4aba-8276-8e357ebb0e69 became leader

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-gn8m8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

certified-operators-gn8m8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 467ms (467ms including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

certified-operators-gn8m8

Created

Created container: registry-server

openshift-marketplace

kubelet

certified-operators-gn8m8

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-68vwc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

community-operators-68vwc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 449ms (449ms including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

community-operators-68vwc

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-68vwc

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 624ms (624ms including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

redhat-marketplace-v64s6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

redhat-operators-xm8sw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 474ms (474ms including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

redhat-operators-xm8sw

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-xm8sw

Started

Started container registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-754bc4d665 to 1

openshift-monitoring

replicaset-controller

prometheus-operator-754bc4d665

SuccessfulCreate

Created pod: prometheus-operator-754bc4d665-xjddh

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.33} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf}]

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:107d0b66a0b081fa2f9ab28965bb268093061321d71c56fba884e29613866285"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-monitoring

multus

prometheus-operator-754bc4d665-xjddh

AddedInterface

Add eth0 [10.128.0.68/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

Created

Created container: prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:107d0b66a0b081fa2f9ab28965bb268093061321d71c56fba884e29613866285" in 1.646s (1.646s including waiting). Image size: 461468192 bytes.

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

Started

Started container kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

kube-state-metrics-59584d565f

SuccessfulCreate

Created pod: kube-state-metrics-59584d565f-gsgxz

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-qk7rz

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-6dbff8cb4c to 1

openshift-monitoring

replicaset-controller

openshift-state-metrics-6dbff8cb4c

SuccessfulCreate

Created pod: openshift-state-metrics-6dbff8cb4c-hvjlk

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-59584d565f to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-59584d565f to 1

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-6dbff8cb4c to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

multus

openshift-state-metrics-6dbff8cb4c-hvjlk

AddedInterface

Add eth0 [10.128.0.69/23] from ovn-kubernetes

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d"

openshift-monitoring

kubelet

node-exporter-qk7rz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438"

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708"

openshift-monitoring

multus

kube-state-metrics-59584d565f-gsgxz

AddedInterface

Add eth0 [10.128.0.70/23] from ovn-kubernetes

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-df0cb9d27f56b272338d64d5b97d8502 and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-df0cb9d27f56b272338d64d5b97d8502 to Done

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-df0cb9d27f56b272338d64d5b97d8502

openshift-monitoring

kubelet

node-exporter-qk7rz

Started

Started container init-textfile

openshift-monitoring

kubelet

node-exporter-qk7rz

Created

Created container: init-textfile

openshift-monitoring

kubelet

node-exporter-qk7rz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" in 1.058s (1.058s including waiting). Image size: 417586222 bytes.
(x10)

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-marketplace

kubelet

redhat-operators-xm8sw

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-monitoring

kubelet

node-exporter-qk7rz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" already present on machine

openshift-monitoring

kubelet

node-exporter-qk7rz

Created

Created container: node-exporter

openshift-monitoring

kubelet

node-exporter-qk7rz

Started

Started container node-exporter

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-qk7rz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Started

Started container openshift-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Created

Created container: kube-state-metrics

openshift-monitoring

kubelet

node-exporter-qk7rz

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-qk7rz

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708" in 3.519s (3.519s including waiting). Image size: 431873347 bytes.

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d" in 3.833s (3.833s including waiting). Image size: 440450463 bytes.

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

Created

Created container: openshift-state-metrics

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-7qtvbjhkqad41 -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

metrics-server-65cdf565cd

SuccessfulCreate

Created pod: metrics-server-65cdf565cd-555rj

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-65cdf565cd to 1

openshift-monitoring

multus

metrics-server-65cdf565cd-555rj

AddedInterface

Add eth0 [10.128.0.71/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb"

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" in 1.806s (1.806s including waiting). Image size: 471325816 bytes.

openshift-network-node-identity

master-0_3b1ba016-8ae4-44f2-917a-7f39f51a887e

ovnkube-identity

LeaderElection

master-0_3b1ba016-8ae4-44f2-917a-7f39f51a887e became leader

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

Started

Started container metrics-server

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

Created

Created container: metrics-server

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0224 05:15:11.993986 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083227 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:12.083331 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.083349 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:12.095946 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:42.096030 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:15:56.100613 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 1"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

APIServiceCreated

Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-operator-controller

operator-controller-controller-manager-9cc7d7bb-t75jj_26617f81-bf3e-4f9a-948e-f646e35e4121

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-9cc7d7bb-t75jj_26617f81-bf3e-4f9a-948e-f646e35e4121 became leader

openshift-catalogd

catalogd-controller-manager-84b8d9d697-zvzxs_bd2f429d-b02d-413f-a8e3-09ac0de503c5

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-84b8d9d697-zvzxs_bd2f429d-b02d-413f-a8e3-09ac0de503c5 became leader

openshift-machine-api

control-plane-machine-set-operator-686847ff5f-zzvtt_69a2dd41-4dda-4082-ab2b-f3f24dadf550

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-686847ff5f-zzvtt_69a2dd41-4dda-4082-ab2b-f3f24dadf550 became leader

openshift-cloud-controller-manager-operator

master-0_5af1da9a-b042-465a-9a32-c34cf5d55ad1

cluster-cloud-config-sync-leader

LeaderElection

master-0_5af1da9a-b042-465a-9a32-c34cf5d55ad1 became leader

openshift-cloud-controller-manager-operator

master-0_9c1f962f-4f4e-42a2-bd79-67591a366c01

cluster-cloud-controller-manager-leader

LeaderElection

master-0_9c1f962f-4f4e-42a2-bd79-67591a366c01 became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-canary namespace

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-5m82s

openshift-ingress-canary

kubelet

ingress-canary-5m82s

Started

Started container serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-5m82s

Created

Created container: serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-5m82s

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" already present on machine

openshift-ingress-canary

multus

ingress-canary-5m82s

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x3)
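The FeatureGatesInitialized message above is the operator's Go struct literal (`featuregates.Features{Enabled:[]v1.FeatureGateName{...}, Disabled:[]v1.FeatureGateName{...}}`) printed as text. A minimal sketch for splitting such a message back into the two name lists; the function name is illustrative, not from any OpenShift tool:

```python
import re


def split_feature_gates(message):
    """Extract the Enabled/Disabled gate names from a FeatureGatesInitialized
    event message (the printed Go struct literal shown in the events above)."""
    def names(section):
        # Each section looks like: Enabled:[]v1.FeatureGateName{"A", "B", ...}
        m = re.search(section + r":\[\]v1\.FeatureGateName\{([^}]*)\}", message)
        return re.findall(r'"([^"]+)"', m.group(1)) if m else []
    return names("Enabled"), names("Disabled")
```

This assumes the message arrives as a single string; in the raw export the literal may be wrapped across lines and would need to be re-joined first.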

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" already present on machine
(x4)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

Started

Started container ingress-operator
(x4)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

Created

Created container: ingress-operator

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-545bf96f4d-tfmbs_1b1ddf53-bb6e-4284-9e25-f333054172f1 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{...} (Enabled and Disabled feature-gate lists identical to the first FeatureGatesInitialized message above)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-fc889cfd5-r6p58_02665d20-db79-480c-a7d6-2dae5f3c81ff became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced")

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-5d87bf58c-ncrqj_cd56747c-1fcc-4897-b504-af03c5e9b3f6 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{...} (Enabled and Disabled feature-gate lists identical to the first FeatureGatesInitialized message above)

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{...} (Enabled and Disabled feature-gate lists identical to the first FeatureGatesInitialized message above)

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-7bcfbc574b-8zrj9_34e5c284-c4bf-4122-b71d-8fd2a6eb9efc became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config-2 -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0224 05:15:19.485062 1 cmd.go:413] Getting controller reference for node master-0 I0224 05:15:19.498205 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0224 05:15:19.498272 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0224 05:15:19.498285 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0224 05:15:19.501443 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0224 05:15:49.501962 1 cmd.go:524] Getting installer pods for node master-0 F0224 05:16:03.502607 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: 
([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 05:15:19.485062 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:19.498205 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:19.498272 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:19.498285 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:19.501443 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:49.501962 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:16:03.502607 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.73/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-2-master-0

Created

Created container: installer

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 05:15:19.485062 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 05:15:19.498205 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 05:15:19.498272 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 05:15:19.498285 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 05:15:19.501443 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 05:15:49.501962 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 05:16:03.502607 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-kube-apiserver

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-2-master-0

Started

Started container installer

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-etcd because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-etcd

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.75/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

installer-2-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-2-master-0

Started

Started container installer

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-c48c8bf7c-mcdrl_53932718-0cea-4484-8281-4a1b085449b2 became leader

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_119cc7ee-386b-446c-9ad2-9f3be7cf3b97 became leader

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{...} (Enabled and Disabled feature-gate lists identical to the first FeatureGatesInitialized message above)

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-j28p2

openshift-multus

kubelet

cni-sysctl-allowlist-ds-j28p2

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-j28p2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine

openshift-multus

kubelet

cni-sysctl-allowlist-ds-j28p2

Started

Started container kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-j28p2

Killing

Stopping container kube-multus-additional-cni-plugins

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-96c995bf5 to 1

openshift-multus

replicaset-controller

multus-admission-controller-5f54bf67d4

SuccessfulCreate

Created pod: multus-admission-controller-5f54bf67d4-5tf9t

openshift-monitoring

replicaset-controller

telemeter-client-96c995bf5

SuccessfulCreate

Created pod: telemeter-client-96c995bf5-57k8x

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-5f54bf67d4 to 1

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

Created

Created container: multus-admission-controller

openshift-multus

multus

multus-admission-controller-5f54bf67d4-5tf9t

AddedInterface

Add eth0 [10.128.0.76/23] from ovn-kubernetes

openshift-monitoring

multus

telemeter-client-96c995bf5-57k8x

AddedInterface

Add eth0 [10.128.0.77/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" already present on machine

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.33"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.33"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

static-pod-installer

installer-2-master-0

StaticPodInstallerCompleted

Successfully installed revision 2

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c" in 1.803s (1.803s including waiting). Image size: 480427687 bytes.

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Created

Created container: telemeter-client

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Started

Started container telemeter-client

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_a6f81d94-a011-479a-9ffc-1e316c6d32c3 became leader

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" in 1.622s (1.622s including waiting). Image size: 437808562 bytes.

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Created

Created container: reload

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Started

Started container reload

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

Started

Started container kube-rbac-proxy
(x16)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10
(x2)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-j28p2

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
(x2)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-j28p2

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl
(x2)

openshift-network-node-identity

kubelet

network-node-identity-rlg4x

Started

Started container approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-rlg4x

Created

Created container: approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-rlg4x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup
(x3)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Created

Created container: authentication-operator
(x3)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Started

Started container authentication-operator
(x2)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" already present on machine
(x2)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-dbsnm

BackOff

Back-off restarting failed container marketplace-operator in pod marketplace-operator-6f5488b997-dbsnm_openshift-marketplace(dd29bef3-d27e-48b3-9aa0-d915e949b3d5)

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Created

Created container: config-sync-controllers
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Started

Started container config-sync-controllers
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Created

Created container: cluster-cloud-controller-manager
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

Started

Started container cluster-cloud-controller-manager
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-t75jj

BackOff

Back-off restarting failed container manager in pod operator-controller-controller-manager-9cc7d7bb-t75jj_openshift-operator-controller(347c43e5-86d5-436f-bdc5-1c7bbe19ab2a)
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

BackOff

Back-off restarting failed container manager in pod catalogd-controller-manager-84b8d9d697-zvzxs_openshift-catalogd(d9492fbf-d0f4-4ecf-84ba-b089d69535c1)
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

BackOff

Back-off restarting failed container manager in pod catalogd-controller-manager-84b8d9d697-zvzxs_openshift-catalogd(d9492fbf-d0f4-4ecf-84ba-b089d69535c1)

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine
(x6)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Unhealthy

Liveness probe failed: Get "https://10.128.0.17:8443/healthz": dial tcp 10.128.0.17:8443: connect: connection refused
(x6)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

ProbeError

Liveness probe error: Get "https://10.128.0.17:8443/healthz": dial tcp 10.128.0.17:8443: connect: connection refused body:
(x2)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

Killing

Container authentication-operator failed liveness probe, will be restarted

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-b8ght

BackOff

Back-off restarting failed container ovnkube-cluster-manager in pod ovnkube-control-plane-5d8dfcdc87-b8ght_openshift-ovn-kubernetes(88b915ff-fd94-4998-aa09-70f95c0f1b8a)
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Created

Created container: manager
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Started

Started container manager
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" already present on machine
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-zvzxs

Started

Started container manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-b8ght

Started

Started container ovnkube-cluster-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-b8ght

Created

Created container: ovnkube-cluster-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-b8ght

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-7657d7494-mmsz6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-7657d7494-mmsz6

Created

Created container: controller-manager
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

Started

Started container machine-approver-controller
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

Created

Created container: machine-approver-controller
(x2)

openshift-controller-manager

kubelet

controller-manager-7657d7494-mmsz6

Started

Started container controller-manager

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

BackOff

Back-off restarting failed container control-plane-machine-set-operator in pod control-plane-machine-set-operator-686847ff5f-zzvtt_openshift-machine-api(32fd577d-8966-4ab1-95cf-357291084156)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

BackOff

Back-off restarting failed container control-plane-machine-set-operator in pod control-plane-machine-set-operator-686847ff5f-zzvtt_openshift-machine-api(32fd577d-8966-4ab1-95cf-357291084156)
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" already present on machine
(x3)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

Created

Created container: control-plane-machine-set-operator
(x3)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

Started

Started container control-plane-machine-set-operator
(x3)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-zzvtt

Created

Created container: control-plane-machine-set-operator
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container cluster-policy-controller failed startup probe, will be restarted
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Unhealthy

Liveness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

ProbeError

Liveness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Unhealthy

Readiness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

ProbeError

Readiness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-vqn96

Started

Started container snapshot-controller
(x4)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-vqn96

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" already present on machine
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-vqn96

Created

Created container: snapshot-controller
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-9ww5z

Started

Started container machine-config-controller
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-9ww5z

Created

Created container: machine-config-controller
(x2)

openshift-machine-config-operator

kubelet

| | machine-config-controller-54cb48566c-9ww5z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine (x2)
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | Created | Created container: openshift-config-operator (x2)
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | Started | Started container openshift-config-operator (x3)
openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-tfmbs | Created | Created container: etcd-operator (x3)
openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-tfmbs | Started | Started container etcd-operator (x2)
openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-tfmbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine (x4)
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-r6p58 | Created | Created container: kube-storage-version-migrator-operator (x4)
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-r6p58 | Started | Started container kube-storage-version-migrator-operator (x3)
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-ncrqj | Created | Created container: kube-apiserver-operator (x3)
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-ncrqj | Started | Started container kube-apiserver-operator (x3)
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-ncrqj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine (x3)
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-r6p58 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" already present on machine (x4)
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-8zrj9 | Created | Created container: kube-controller-manager-operator (x4)
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-8zrj9 | Started | Started container kube-controller-manager-operator (x3)
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-8zrj9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine (x4)
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | Unhealthy | Readiness probe failed: Get "https://10.128.0.57:8443/healthz": dial tcp 10.128.0.57:8443: connect: connection refused (x4)
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | ProbeError | Readiness probe error: Get "https://10.128.0.57:8443/healthz": dial tcp 10.128.0.57:8443: connect: connection refused body: (x3)
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | ProbeError | Liveness probe error: Get "https://10.128.0.57:8443/healthz": dial tcp 10.128.0.57:8443: connect: connection refused body:
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | Killing | Container openshift-config-operator failed liveness probe, will be restarted (x3)
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | Unhealthy | Liveness probe failed: Get "https://10.128.0.57:8443/healthz": dial tcp 10.128.0.57:8443: connect: connection refused (x2)
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-7b87v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" already present on machine (x10)
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: (x2)
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-8tv99 | Created | Created container: csi-snapshot-controller-operator (x3)
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-8l7xv | Created | Created container: kube-scheduler-operator-container (x2)
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-49fsv | Started | Started container openshift-apiserver-operator (x2)
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-49fsv | Created | Created container: openshift-apiserver-operator
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine (x2)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | Started | Started container cluster-node-tuning-operator (x2)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | Created | Created container: cluster-node-tuning-operator (x2)
openshift-service-ca | kubelet | service-ca-576b4d78bd-fsmrl | Started | Started container service-ca-controller (x2)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | Created | Created container: cluster-node-tuning-operator
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-qh6j7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7" already present on machine (x2)
openshift-service-ca | kubelet | service-ca-576b4d78bd-fsmrl | Created | Created container: service-ca-controller (x2)
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-qh6j7 | Started | Started container cluster-olm-operator (x2)
openshift-service-ca | kubelet | service-ca-576b4d78bd-fsmrl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" already present on machine (x2)
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-zz9fm | Created | Created container: openshift-controller-manager-operator (x3)
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-8l7xv | Started | Started container kube-scheduler-operator-container
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-49fsv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" already present on machine (x2)
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-qh6j7 | Created | Created container: cluster-olm-operator (x2)
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-8l7xv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-zz9fm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" already present on machine
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-8tv99 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" already present on machine (x2)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | Started | Started container cluster-node-tuning-operator (x2)
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-8tv99 | Started | Started container csi-snapshot-controller-operator
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-h99t4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine (x2)
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-zz9fm | Started | Started container openshift-controller-manager-operator
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_1293ba9c-1ae7-48e0-aa2c-70b2f10421eb stopped leading
openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-65mc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34" already present on machine (x3)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-d6bb9bb76-54hnv_openshift-machine-api(39623346-691b-42c8-af76-409d4f6629af)
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7657d7494-mmsz6 became leader (x3)
openshift-network-operator | kubelet | network-operator-7d7db75979-4fk6k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine
openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-mcf2z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6" already present on machine (x2)
openshift-route-controller-manager | kubelet | route-controller-manager-654dcf5585-fgmnd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine (x2)
openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-922md | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine
openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-mcf2z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6" already present on machine
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f94476f49-tlmg5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75" already present on machine
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_477a36a0-4269-46c7-8ab3-5719a01be40d became leader
openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-t98nr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721" already present on machine
openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5d8dfcdc87-b8ght became leader (x3)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-d6bb9bb76-54hnv_openshift-machine-api(39623346-691b-42c8-af76-409d4f6629af)
openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope (x2)
openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-mcdrl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" already present on machine
openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-65mc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34" already present on machine
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-9d82f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine (x2)
openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-65mc5 | Started | Started container machine-api-operator (x2)
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-9d82f | Created | Created container: package-server-manager (x3)
openshift-network-operator | kubelet | network-operator-7d7db75979-4fk6k | Started | Started container network-operator (x2)
openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-65mc5 | Started | Started container machine-api-operator (x2)
openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-65mc5 | Created | Created container: machine-api-operator (x2)
openshift-route-controller-manager | kubelet | route-controller-manager-654dcf5585-fgmnd | Created | Created container: route-controller-manager (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-mcf2z | Started | Started container cluster-autoscaler-operator (x2)
openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-65mc5 | Created | Created container: machine-api-operator (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-mcf2z | Created | Created container: cluster-autoscaler-operator (x3)
openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-mcdrl | Created | Created container: service-ca-operator
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-5f98f4f8d5 to 0 from 1
openshift-multus | replicaset-controller | multus-admission-controller-5f98f4f8d5 | SuccessfulDelete | Deleted pod: multus-admission-controller-5f98f4f8d5-b985k (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-mcf2z | Started | Started container cluster-autoscaler-operator (x2)
openshift-route-controller-manager | kubelet | route-controller-manager-654dcf5585-fgmnd | Started | Started container route-controller-manager (x2)
openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-922md | Created | Created container: machine-config-operator (x2)
openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-922md | Started | Started container machine-config-operator (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-mcf2z | Created | Created container: cluster-autoscaler-operator (x2)
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f94476f49-tlmg5 | Started | Started container cluster-storage-operator (x2)
openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-t98nr | Started | Started container cluster-image-registry-operator (x2)
openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-t98nr | Created | Created container: cluster-image-registry-operator
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-5f98f4f8d5 to 0 from 1
openshift-multus | replicaset-controller | multus-admission-controller-5f98f4f8d5 | SuccessfulDelete | Deleted pod: multus-admission-controller-5f98f4f8d5-b985k
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-b985k | Killing | Stopping container kube-rbac-proxy
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-b985k | Killing | Stopping container multus-admission-controller (x2)
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f94476f49-tlmg5 | Created | Created container: cluster-storage-operator
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-b985k | Killing | Stopping container multus-admission-controller
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-b985k | Killing | Stopping container kube-rbac-proxy (x3)
openshift-network-operator | kubelet | network-operator-7d7db75979-4fk6k | Created | Created container: network-operator (x2)
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-9d82f | Started | Started container package-server-manager (x3)
openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-mcdrl | Started | Started container service-ca-operator
openshift-operator-lifecycle-manager | package-server-manager-5c75f78c8b-9d82f_e16280a4-39f0-4289-b59a-bf71095fd0cf | packageserver-controller-lock | LeaderElection | package-server-manager-5c75f78c8b-9d82f_e16280a4-39f0-4289-b59a-bf71095fd0cf became leader (x3)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" already present on machine (x4)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | Created | Created container: cluster-baremetal-operator (x4)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | Created | Created container: cluster-baremetal-operator (x4)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | Started | Started container cluster-baremetal-operator (x4)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | Started | Started container cluster-baremetal-operator (x3)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-54hnv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" already present on machine
openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api | cluster-autoscaler-operator-86b8dc6d6-mcf2z_950561b0-ff89-414b-9265-e293eaa92224 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-86b8dc6d6-mcf2z_950561b0-ff89-414b-9265-e293eaa92224 became leader

openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-c48c8bf7c-mcdrl_666ecab4-818c-4a87-b910-d62a9dd90ac3 became leader

openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-779979bdf7-t98nr_c55624ea-f961-4df6-8cd6-f44ca00bfe7a became leader

openshift-machine-api | cluster-autoscaler-operator-86b8dc6d6-mcf2z_950561b0-ff89-414b-9265-e293eaa92224 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-86b8dc6d6-mcf2z_950561b0-ff89-414b-9265-e293eaa92224 became leader

openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-654dcf5585-fgmnd_6c6f0924-17c8-48ed-a80c-fd218f96242c became leader

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_ac780fb7-1bdb-46ac-aa14-5432c5915caa became leader

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-f94476f49-tlmg5_893b953b-c4ac-46c4-980b-3e5f9378b002 became leader

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-jvrlq

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-57476485-7g2gq

Created

Created container: cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-57476485-7g2gq

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-57476485-7g2gq

Started

Started container cluster-version-operator
(x6)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-kbb8z

BackOff

Back-off restarting failed container authentication-operator in pod authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29531850

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531850

SuccessfulCreate

Created pod: collect-profiles-29531850-l54gb
(x12)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-vqn96

BackOff

Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6847bb4785-vqn96

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6847bb4785-vqn96 became leader

openshift-network-node-identity

master-0_12433c5b-1c3f-45d7-85e5-fff80db477f6

ovnkube-identity

LeaderElection

master-0_12433c5b-1c3f-45d7-85e5-fff80db477f6 became leader

openshift-cloud-controller-manager-operator

master-0_a5f459ed-dea1-47af-8802-8b869da460cb

cluster-cloud-controller-manager-leader

LeaderElection

master-0_a5f459ed-dea1-47af-8802-8b869da460cb became leader

openshift-machine-api

control-plane-machine-set-operator-686847ff5f-zzvtt_4e7c8064-08b0-4611-adce-571755e1706d

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-686847ff5f-zzvtt_4e7c8064-08b0-4611-adce-571755e1706d became leader

openshift-cluster-machine-approver

master-0_8db50bad-d327-4b31-91f3-3618c4199b9d

cluster-machine-approver-leader

LeaderElection

master-0_8db50bad-d327-4b31-91f3-3618c4199b9d became leader

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_490bed09-83b7-4c65-870c-57928e1aae81 became leader

openshift-operator-controller

operator-controller-controller-manager-9cc7d7bb-t75jj_e07657e9-28ad-4695-8de9-ad4834b16eb3

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-9cc7d7bb-t75jj_e07657e9-28ad-4695-8de9-ad4834b16eb3 became leader

openshift-cloud-controller-manager-operator

master-0_87aca953-60bd-462b-bb8a-07ce27d4dd8f

cluster-cloud-config-sync-leader

LeaderElection

master-0_87aca953-60bd-462b-bb8a-07ce27d4dd8f became leader

openshift-catalogd

catalogd-controller-manager-84b8d9d697-zvzxs_441742c2-91cb-41c2-9213-8ed693ac4316

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-84b8d9d697-zvzxs_441742c2-91cb-41c2-9213-8ed693ac4316 became leader

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_8c593828-f161-48b7-ad97-a500dc570a63 became leader

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531850-l54gb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-operator-lifecycle-manager

multus

collect-profiles-29531850-l54gb

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531850-l54gb

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531850-l54gb

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29531850, condition: Complete

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531850

Completed

Job completed

openshift-machine-api

cluster-baremetal-operator-d6bb9bb76-54hnv_35723509-24d1-4bc7-ba85-8f48965755c2

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-d6bb9bb76-54hnv_35723509-24d1-4bc7-ba85-8f48965755c2 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-584cc7bcb5-zz9fm_e4a92f7b-0301-47cd-ba45-d571983c666a became leader
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-5bd7768f54-qh6j7_8c4215cb-94fb-4186-b1d8-813393da5896 became leader

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from False to True ("CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts catalogd-controller-manager)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: ")

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts catalogd-controller-manager)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ScriptControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-scripts)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)")

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x3)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from True to False ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-545bf96f4d-tfmbs_9b0fe17c-0a37-4a99-a4c5-c1aa41b3b57b became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-scripts)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found")
(x645)

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2\nEtcdMembersAvailable: 1 members are available"

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-8586dccc9b-49fsv_baf6df41-52ea-48f0-b029-72eaf0934655 became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
(x26)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

BackOff

Back-off restarting failed container ingress-operator in pod ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)
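The BackOff event above is the kubelet's crash-loop handling: each restart of a failing container waits longer than the last, doubling from a short base delay up to a cap (10s base and a 5-minute cap are the long-standing Kubernetes defaults). A sketch of that schedule (`restart_delays` is a hypothetical helper for illustration):

```python
def restart_delays(n: int, base: int = 10, cap: int = 300) -> list[int]:
    """Exponential crash-loop back-off: the delay doubles per restart and is
    capped (Kubernetes defaults: 10s base, 300s cap)."""
    return [min(base * 2 ** i, cap) for i in range(n)]

print(restart_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```

This is why a repeatedly failing pod such as ingress-operator-6569778c84-rr8r7 settles into roughly one restart attempt every five minutes.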

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-77cd4d9559-8l7xv_c7352c55-3bcc-472f-90ef-aa0bc2b797ca became leader
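The LeaderElection events throughout this log come from the lease pattern: each operator replica tries to acquire a named lock object and becomes leader only if the lock is free, expired, or already its own. A toy in-memory sketch of that rule (illustrative only, not the client-go implementation):

```python
import time

class Lease:
    """Toy lease-based leader election lock (illustrative only)."""

    def __init__(self, duration: float):
        self.duration = duration  # seconds a successful acquire/renew lasts
        self.holder = None
        self.expires = 0.0

    def try_acquire(self, candidate: str, now=None) -> bool:
        """Acquire if the lease is free, expired, or held by this candidate."""
        now = time.monotonic() if now is None else now
        if self.holder is None or now >= self.expires or self.holder == candidate:
            self.holder, self.expires = candidate, now + self.duration
            return True
        return False

lease = Lease(duration=15.0)
print(lease.try_acquire("operator-a", now=0.0))   # True: lock was free
print(lease.try_acquire("operator-b", now=5.0))   # False: lease still held
print(lease.try_acquire("operator-b", now=20.0))  # True: lease expired
```

The pod-name_uuid identifiers in the messages (e.g. `...-8l7xv_c7352c55-...`) are the candidate identities written into the lock object.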

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:sa-listing-configmaps)\nKubeControllerManagerStaticResourcesDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:sa-listing-configmaps)\nKubeControllerManagerStaticResourcesDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io 
system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:sa-listing-configmaps)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2"
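Status strings like "1 node is at revision 1; 0 nodes have achieved new revision 2" are derived from each node's current static-pod revision versus the latest revision. A sketch of how such a summary can be produced (`revision_summary` is a hypothetical helper mimicking the message format, not the operator's code):

```python
from collections import Counter

def revision_summary(node_revisions: list, target: int) -> str:
    """Summarize per-node static-pod revisions in the style of the
    NodeInstallerProgressing messages above."""
    counts = Counter(node_revisions)
    parts = []
    for rev in sorted(counts):
        n = counts[rev]
        verb = "are" if n != 1 else "is"
        noun = "nodes" if n != 1 else "node"
        parts.append(f"{n} {noun} {verb} at revision {rev}")
    if counts.get(target, 0) == 0:
        parts.append(f"0 nodes have achieved new revision {target}")
    return "; ".join(parts)

print(revision_summary([1], target=2))
# 1 node is at revision 1; 0 nodes have achieved new revision 2
```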

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-fc889cfd5-r6p58_b84662f7-ee05-400f-95b3-e881415a5456 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-scheduler

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-scheduler

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes
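The AddedInterface event reports the pod's address in CIDR form, which encodes both the pod IP and the node's pod subnet. Python's standard `ipaddress` module can unpack it:

```python
import ipaddress

# Pod address exactly as reported by the multus AddedInterface event above.
iface = ipaddress.ip_interface("10.128.0.79/23")
print(iface.ip)                     # 10.128.0.79  (the pod IP)
print(iface.network)                # 10.128.0.0/23  (the node's pod subnet)
print(iface.network.num_addresses)  # 512 addresses in a /23
```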

openshift-kube-scheduler

kubelet

installer-2-master-0

Started

Started container installer

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_abfc98b1-d646-47d9-8e90-e13a9929137a became leader

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64"

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-bcf775fc9-h99t4_ece0a331-c757-484b-a850-500f53f2c2c2

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-bcf775fc9-h99t4_ece0a331-c757-484b-a850-500f53f2c2c2 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-6fb4df594f-8tv99_da0164a3-a5f0-42ae-a2f0-b57be11d57e9 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-6f47d587d6-7b87v_9508a01b-f818-4707-89c6-e2699c5a1fc4 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-576b4d78bd-fsmrl_8ebc1dbc-e0dc-4f59-ba14-9ae7dfa55288 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-7bcfbc574b-8zrj9_29476f92-3e65-4e9d-aec3-e4e340e2b306 became leader

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-5bd7c86784-kbb8z_3b304b79-eeae-4ca6-b3e8-c836662463bf became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("SATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)")

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "SATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" to "SATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)"

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-5d87bf58c-ncrqj_dd543534-2ed0-4efd-8f4b-b5944e725485 became leader

openshift-kube-scheduler

static-pod-installer

installer-2-master-0

StaticPodInstallerCompleted

Successfully installed revision 2

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-cert-syncer

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready")

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 2 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing
(x4)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NeedsReinstall

apiServices not installed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing
(x6)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallCheckFailed

install timeout

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing
(x7)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing
(x6)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: :24] Failed to get secret openshift-kube-apiserver/service-network-serving-certkey: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0224 05:25:17.228206 1 copy.go:24] Failed to get secret openshift-kube-apiserver/service-network-serving-certkey: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 05:25:31.228940 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-2-master-0.18971768ca1985a3.f857d4b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-2-master-0,UID:7d063f48-5f89-47d0-bafc-84a52839c806,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-24 05:25:17.228287395 +0000 UTC m=+90.802913430,LastTimestamp:2026-02-24 05:25:17.228287395 +0000 UTC m=+90.802913430,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0224 05:25:31.229279 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "
(x6)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: :24] Failed to get secret openshift-kube-apiserver/service-network-serving-certkey: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) I0224 05:25:17.228206 1 copy.go:24] Failed to get secret openshift-kube-apiserver/service-network-serving-certkey: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0224 05:25:31.228940 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-2-master-0.18971768ca1985a3.f857d4b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-2-master-0,UID:7d063f48-5f89-47d0-bafc-84a52839c806,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-24 05:25:17.228287395 +0000 UTC m=+90.802913430,LastTimestamp:2026-02-24 05:25:17.228287395 +0000 UTC m=+90.802913430,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/events?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0224 05:25:31.229279 1 cmd.go:109] failed to copy: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_7bdef853-051d-4bf4-8e8d-7843d9c6b842 became leader

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

cert-recovery-controller

openshift-kube-scheduler

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": tls: failed to verify certificate: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, not localhost-recovery

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-retry-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-2-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

multus

installer-2-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-2-retry-1-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-2-retry-1-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: :24] Failed to get secret openshift-kube-apiserver/service-network-serving-certkey: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0224 05:25:17.228206 1 copy.go:24] Failed to get secret openshift-kube-apiserver/service-network-serving-certkey: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 05:25:31.228940 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-2-master-0.18971768ca1985a3.f857d4b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-2-master-0,UID:7d063f48-5f89-47d0-bafc-84a52839c806,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-24 05:25:17.228287395 +0000 UTC m=+90.802913430,LastTimestamp:2026-02-24 05:25:17.228287395 +0000 UTC m=+90.802913430,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0224 05:25:31.229279 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/service-network-serving-certkey?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-apiserver

kubelet

installer-2-retry-1-master-0

Killing

Stopping container installer

openshift-kube-apiserver

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-3-master-0

Created

Created container: installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_20627f1c-8b27-4121-92ce-8f5a965e833f became leader

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

cert-recovery-controller

openshift-kube-controller-manager

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": tls: failed to verify certificate: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, not localhost-recovery

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 2 to 3 because static pod is ready

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_ef27d5b7-4eeb-4d5d-8613-99fb01375161 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller
(x17)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

default

kubelet

master-0

Starting

Starting kubelet.

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-machine-config-operator

kubelet

machine-config-server-xxl55

FailedMount

MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

FailedMount

MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-9ww5z

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-7b87v

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59b498fcfb-mprnx

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-654dcf5585-fgmnd

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-654dcf5585-fgmnd

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-df5f88cd4-cwzcs

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-df5f88cd4-cwzcs

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-922md

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-922md

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-hmlsl

FailedMount

MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-922md

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-8xdmq

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-xxl55

FailedMount

MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-tlmg5

FailedMount

MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59b498fcfb-mprnx

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-9ww5z

FailedMount

MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-qk7rz

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

FailedMount

MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-68rth

FailedMount

MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-qk7rz

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-qk7rz

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-qk7rz

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mcf2z

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-7657d7494-mmsz6

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-qk7rz

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-qk7rz

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-t98nr

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-pb6sw

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-t98nr

FailedMount

MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59b498fcfb-mprnx

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

FailedMount

MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-xjddh

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-c56dz

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-65mc5

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

EtcdEndpointsErrorUpdatingStatus

Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-ingress-canary

kubelet

ingress-canary-5m82s

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "telemeter-client-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-hvjlk

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "secret-telemeter-client" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "federate-client-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-gsgxz

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-5tf9t

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "telemeter-client-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "secret-telemeter-client" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

telemeter-client-96c995bf5-57k8x

FailedMount

MountVolume.SetUp failed for volume "federate-client-tls" : failed to sync secret cache: timed out waiting for the condition
(x3)

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID
(x3)

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_cc5f9765-b602-4303-8447-5a725bf5c59d became leader
(x3)

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

Started

Started container ingress-operator

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" already present on machine

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-rr8r7

Created

Created container: ingress-operator

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused" to "All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"operator" "4.18.33"} {"kube-apiserver" "1.31.14"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io 
\"oauth-openshift\" not found"
(x11)

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace 
openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"
(x11)

openshift-ingress

kubelet

router-default-7b65dc9fcb-zxkt2

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed
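The probe body above follows the Kubernetes healthz convention: each named sub-check reports `[+]name ok` or `[-]name failed`, reasons are withheld unless verbose output is requested, and the endpoint returns 500 if any sub-check fails. A minimal sketch of that aggregation pattern (the check names and failure logic here are illustrative, not the router's actual checks):

```python
def run_healthz(checks):
    """Aggregate named health checks into a (status_code, body) pair,
    mimicking the [+]/[-] report format seen in kubelet probe errors."""
    lines, healthy = [], True
    for name, check in checks.items():
        try:
            check()
            lines.append(f"[+]{name} ok")
        except Exception:
            healthy = False
            # Real healthz output withholds failure reasons unless verbose.
            lines.append(f"[-]{name} failed: reason withheld")
    if healthy:
        return 200, "\n".join(lines)
    return 500, "\n".join(lines) + "\nhealthz check failed"

def _backend_not_synced():
    raise RuntimeError("backend not synced")

# One failing sub-check drives the whole endpoint to 500, which is
# what the kubelet then reports as a startup probe failure.
status, body = run_healthz({
    "backend-http": _backend_not_synced,
    "process-running": lambda: None,
})
```

This mirrors why the router pod above flaps during bootstrap: `process-running` can pass while `backend-http`/`has-synced` still fail, so the aggregate stays at 500 until every sub-check clears.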

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/sa-token-signing-certs has changed"
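The StartingNewRevision event above comes from a controller that watches the content of its required resources and cuts a new numbered revision whenever any of them changes, recording which resource triggered it. A simplified sketch of that pattern (the class, the hashing shortcut, and the message format are illustrative assumptions, not the operator's actual implementation):

```python
import hashlib
import json

def content_hash(data: dict) -> str:
    # Stable digest of a configmap-like payload; the real controller
    # compares stored resource copies, a hash just keeps this sketch small.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

class RevisionController:
    def __init__(self):
        self.revision = 0
        self.seen = {}  # resource name -> content hash at last revision

    def sync(self, required: dict) -> list:
        """Return the trigger reasons for a new revision, if any."""
        reasons = [
            f'required configmap/{name} has changed'
            for name, data in required.items()
            if self.seen.get(name) != content_hash(data)
        ]
        if reasons:
            self.revision += 1
            self.seen = {n: content_hash(d) for n, d in required.items()}
        return reasons

rc = RevisionController()
rc.sync({"sa-token-signing-certs": {"service-account-001.pub": "A"}})
# A new signing key lands in the configmap, so the next sync bumps the
# revision and reports why -- the shape of the event logged above.
reasons = rc.sync({"sa-token-signing-certs": {"service-account-001.pub": "A",
                                              "service-account-002.pub": "B"}})
```

Each bump then drives the `ConfigMapCreated ... kube-apiserver-pod-4`, `config-4`, `etcd-serving-ca-4` events that follow, as the controller copies the inputs into revision-numbered resources.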

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain 
an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-003.pub

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdateFailed

Failed to update ConfigMap/sa-token-signing-certs -n openshift-config-managed: Operation cannot be fulfilled on configmaps "sa-token-signing-certs": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on configmaps \"sa-token-signing-certs\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on configmaps \"sa-token-signing-certs\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready"
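The ConfigMapUpdateFailed event and its recovery above are Kubernetes optimistic concurrency at work: every object carries a resourceVersion, a write based on a stale version is rejected with "the object has been modified; please apply your changes to the latest version and try again", and the client is expected to re-read and retry (which is why the Degraded condition clears on the next sync). A self-contained sketch of that get-modify-retry loop, with an in-memory store standing in for the API server (names are illustrative):

```python
class Conflict(Exception):
    pass

class Store:
    """Tiny stand-in for the API server's resourceVersion check."""
    def __init__(self, data):
        self.data, self.rv = data, 1

    def get(self):
        return dict(self.data), self.rv

    def update(self, data, rv):
        if rv != self.rv:  # stale resourceVersion -> 409 Conflict
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.data, self.rv = data, self.rv + 1

def update_with_retry(store, mutate, attempts=5):
    """Get-modify-update loop that re-reads on conflict; client-go's
    retry.RetryOnConflict follows the same shape."""
    for _ in range(attempts):
        data, rv = store.get()
        mutate(data)
        try:
            store.update(data, rv)
            return
        except Conflict:
            continue  # another writer won the race; re-read and retry
    raise Conflict("out of retries")

store = Store({"service-account-001.pub": "A"})
# A concurrent writer (here, another controller adding a signing key)
# advances the resourceVersion out from under the first writer:
data, rv = store.get()
store.update({**data, "service-account-002.pub": "B"}, rv)
# Any write still based on the old rv would now be rejected; the retry
# loop re-reads the latest version and succeeds on a fresh base.
update_with_retry(store, lambda d: d.update({"service-account-003.pub": "C"}))
```

The transient SATokenSignerDegraded condition in the surrounding events is exactly this cycle: the conflict is reported once, the controller retries against the latest version, and the Degraded message reverts on the next status sync.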

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-003.pub

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-6d4d899fc6 to 1

openshift-authentication

kubelet

oauth-openshift-6d4d899fc6-cgn6l

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-session" : secret "v4-0-config-system-session" not found

openshift-authentication

replicaset-controller

oauth-openshift-6d4d899fc6

SuccessfulCreate

Created pod: oauth-openshift-6d4d899fc6-cgn6l

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing
(x23)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.33"
(x23)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14"

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console namespace

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" 
(string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: ",Progressing changed from False to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-operator namespace

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-user-settings namespace

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: 
connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing
(x4)

openshift-authentication

kubelet

oauth-openshift-6d4d899fc6-cgn6l

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

deployment-controller

console-operator

ScalingReplicaSet

Scaled up replica set console-operator-5df5ffc47c to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

replicaset-controller

console-operator-5df5ffc47c

SuccessfulCreate

Created pod: console-operator-5df5ffc47c-s22jd

openshift-authentication

multus

oauth-openshift-6d4d899fc6-cgn6l

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes

openshift-console-operator

multus

console-operator-5df5ffc47c-s22jd

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-console-operator

kubelet

console-operator-5df5ffc47c-s22jd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b"

openshift-authentication

kubelet

oauth-openshift-6d4d899fc6-cgn6l

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

monitoring-plugin

ScalingReplicaSet

Scaled up replica set monitoring-plugin-755c6d6fd4 to 1

openshift-console-operator

kubelet

console-operator-5df5ffc47c-s22jd

Started

Started container console-operator

openshift-authentication

kubelet

oauth-openshift-6d4d899fc6-cgn6l

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" in 3.56s (3.56s including waiting). Image size: 481353554 bytes.

openshift-console-operator

kubelet

console-operator-5df5ffc47c-s22jd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b" in 3.364s (3.364s including waiting). Image size: 512134379 bytes.

openshift-monitoring

replicaset-controller

monitoring-plugin-755c6d6fd4

SuccessfulCreate

Created pod: monitoring-plugin-755c6d6fd4-4ztmm

openshift-console-operator

kubelet

console-operator-5df5ffc47c-s22jd

Created

Created container: console-operator

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-authentication

kubelet

oauth-openshift-6d4d899fc6-cgn6l

Started

Started container oauth-openshift

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication

kubelet

oauth-openshift-6d4d899fc6-cgn6l

Created

Created container: oauth-openshift

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found"

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.33"

openshift-monitoring

kubelet

monitoring-plugin-755c6d6fd4-4ztmm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b"

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-5df5ffc47c-s22jd_8fd9feaa-9d34-40ca-882c-f5e1ca09c904 became leader

openshift-console

deployment-controller

downloads

ScalingReplicaSet

Scaled up replica set downloads-955b69498 to 1
(x2)

openshift-console

controllermanager

downloads

NoPods

No matching pods found

openshift-console

replicaset-controller

downloads-955b69498

SuccessfulCreate

Created pod: downloads-955b69498-crzjg

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

multus

monitoring-plugin-755c6d6fd4-4ztmm

AddedInterface

Add eth0 [10.128.0.85/23] from ovn-kubernetes

openshift-monitoring

kubelet

monitoring-plugin-755c6d6fd4-4ztmm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b"
(x7)

openshift-kube-apiserver

kubelet

installer-3-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
(x2)

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_f088ee8e-7c3a-4ea0-9b45-3aac1c078e6f became leader

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/console -n openshift-console because it was missing

openshift-console-operator

console-operator-oauthclient-secret-controller-oauthclientsecretcontroller

console-operator

SecretCreated

Created Secret/console-oauth-config -n openshift-console because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"

openshift-monitoring

kubelet

monitoring-plugin-755c6d6fd4-4ztmm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b" in 1.729s (1.729s including waiting). Image size: 447705420 bytes.

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/downloads -n openshift-console because it was missing

openshift-console

multus

downloads-955b69498-crzjg

AddedInterface

Add eth0 [10.128.0.86/23] from ovn-kubernetes

openshift-console

kubelet

downloads-955b69498-crzjg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/default-ingress-cert -n openshift-console because it was missing

openshift-monitoring

kubelet

monitoring-plugin-755c6d6fd4-4ztmm

Created

Created container: monitoring-plugin

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

monitoring-plugin-755c6d6fd4-4ztmm

Started

Started container monitoring-plugin

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_391b7da5-2463-478f-b6df-bd5b6d1a09e8 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "All is well"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7e373bb5"...)}},    "controllers": []any{    ... // 8 identical elements    string("openshift.io/deploymentconfig"),    string("openshift.io/image-import"),    strings.Join({ +  "-",    "openshift.io/image-puller-rolebindings",    }, ""),    string("openshift.io/image-signature-import"),    string("openshift.io/image-trigger"),    ... // 2 identical elements    string("openshift.io/origin-namespace"),    string("openshift.io/serviceaccount"),    strings.Join({ +  "-",    "openshift.io/serviceaccount-pull-secrets",    }, ""),    string("openshift.io/templateinstance"),    string("openshift.io/templateinstancefinalizer"),    string("openshift.io/unidling"),    },    "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f7696d1b6"...)}},    "featureGates": []any{string("BuildCSIVolumes=true")},    "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   }

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\n- \t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n \t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n \t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n \t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n \t},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentCreated

Created Deployment.apps/console -n openshift-console because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveConsoleURL

assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-config -n openshift-console because it was missing

openshift-console

replicaset-controller

console-5b6cfdbd

SuccessfulCreate

Created pod: console-5b6cfdbd-5qbf5

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-5b6cfdbd to 1

openshift-console

kubelet

console-5b6cfdbd-5qbf5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657"

openshift-console

multus

console-5b6cfdbd-5qbf5

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-5584b45765 to 1 from 0

openshift-authentication

replicaset-controller

oauth-openshift-5584b45765

SuccessfulCreate

Created pod: oauth-openshift-5584b45765-vxlqk

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing

openshift-authentication

kubelet

oauth-openshift-6d4d899fc6-cgn6l

Killing

Stopping container oauth-openshift

openshift-authentication

replicaset-controller

oauth-openshift-6d4d899fc6

SuccessfulDelete

Deleted pod: oauth-openshift-6d4d899fc6-cgn6l

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
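Conditions like the one above pack several controller messages into a single string, one "Controller: detail" line per controller, separated by `\n`. A small sketch for spotting which controller's line actually changed between the old and new message; the strings below are abbreviated from the event above, and `changed_controllers` is an illustrative helper, not an OpenShift API:

```python
def changed_controllers(old_msg: str, new_msg: str) -> set[str]:
    """Return the controller prefixes whose detail text differs."""
    parse = lambda m: dict(line.split(": ", 1) for line in m.splitlines())
    old, new = parse(old_msg), parse(new_msg)
    return {k for k in old.keys() | new.keys() if old.get(k) != new.get(k)}

# Abbreviated old/new Progressing messages from the event above.
before = ("OAuthServerDeploymentProgressing: 1/1 pods have been updated\n"
          "WellKnownReadyControllerProgressing: endpoint not yet served")
after = ("OAuthServerDeploymentProgressing: observed generation is 1, desired generation is 2.\n"
         "WellKnownReadyControllerProgressing: endpoint not yet served")

print(changed_controllers(before, after))  # {'OAuthServerDeploymentProgressing'}
```

Here only the deployment-progress line moved; the well-known-endpoint wait is carried along unchanged.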

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-6d4d899fc6 to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-controller-manager

replicaset-controller

controller-manager-58c8457759

SuccessfulCreate

Created pod: controller-manager-58c8457759-bzjjl

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-58c8457759 to 1 from 0

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-7657d7494

SuccessfulDelete

Deleted pod: controller-manager-7657d7494-mmsz6

openshift-controller-manager

kubelet

controller-manager-7657d7494-mmsz6

Killing

Stopping container controller-manager

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-85f8857db4 to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-654dcf5585 to 0 from 1

openshift-kube-apiserver

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-route-controller-manager

replicaset-controller

route-controller-manager-85f8857db4

SuccessfulCreate

Created pod: route-controller-manager-85f8857db4-hhqvj

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well")

openshift-route-controller-manager

kubelet

route-controller-manager-654dcf5585-fgmnd

Killing

Stopping container route-controller-manager

openshift-route-controller-manager

replicaset-controller

route-controller-manager-654dcf5585

SuccessfulDelete

Deleted pod: route-controller-manager-654dcf5585-fgmnd

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-7657d7494 to 0 from 1

openshift-kube-apiserver

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-5b6cfdbd-5qbf5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" in 4.584s (4.584s including waiting). Image size: 633766177 bytes.

openshift-route-controller-manager

multus

route-controller-manager-85f8857db4-hhqvj

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-85f8857db4-hhqvj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine

openshift-console

kubelet

console-5b6cfdbd-5qbf5

Started

Started container console

openshift-console

kubelet

console-5b6cfdbd-5qbf5

Created

Created container: console

openshift-controller-manager

multus

controller-manager-58c8457759-bzjjl

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-58c8457759-bzjjl

Created

Created container: controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-85f8857db4-hhqvj

Created

Created container: route-controller-manager

openshift-controller-manager

kubelet

controller-manager-58c8457759-bzjjl

Started

Started container controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-85f8857db4-hhqvj

Started

Started container route-controller-manager

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

kubelet

controller-manager-58c8457759-bzjjl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-85f8857db4-hhqvj_a9497b64-ada3-414f-8953-d4777b37d5b2 became leader

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-58c8457759-bzjjl became leader

openshift-console

replicaset-controller

console-67bcb9df49

SuccessfulCreate

Created pod: console-67bcb9df49-d2cv6

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-67bcb9df49 to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found"

openshift-console

kubelet

console-67bcb9df49-d2cv6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapUpdated

Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig

openshift-console

kubelet

console-67bcb9df49-d2cv6

Created

Created container: console

openshift-console

multus

console-67bcb9df49-d2cv6

AddedInterface

Add eth0 [10.128.0.91/23] from ovn-kubernetes

openshift-console

kubelet

console-67bcb9df49-d2cv6

Started

Started container console

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available changed from Unknown to False ("RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available message changed from "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-authentication

replicaset-controller

oauth-openshift-5584b45765

SuccessfulDelete

Deleted pod: oauth-openshift-5584b45765-vxlqk

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-5584b45765 to 0 from 1
(x2)

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-64b7796859 to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"


openshift-authentication

replicaset-controller

oauth-openshift-64b7796859

SuccessfulCreate

Created pod: oauth-openshift-64b7796859-6g644

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 0 replicas available"

openshift-kube-apiserver

kubelet

installer-4-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-console

kubelet

downloads-955b69498-crzjg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085" in 36.043s (36.043s including waiting). Image size: 2895784037 bytes.

openshift-console

kubelet

downloads-955b69498-crzjg

Started

Started container download-server

openshift-console

kubelet

downloads-955b69498-crzjg

Created

Created container: download-server

openshift-kube-apiserver

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.92/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-5-master-0

Started

Started container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver

kubelet

installer-5-master-0

Created

Created container: installer

openshift-authentication

multus

oauth-openshift-64b7796859-6g644

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-authentication

kubelet

oauth-openshift-64b7796859-6g644

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication

kubelet

oauth-openshift-64b7796859-6g644

Started

Started container oauth-openshift

openshift-authentication

kubelet

oauth-openshift-64b7796859-6g644

Created

Created container: oauth-openshift

openshift-console

kubelet

downloads-955b69498-crzjg

ProbeError

Liveness probe error: Get "http://10.128.0.86:8080/": dial tcp 10.128.0.86:8080: connect: connection refused body:
(x3)

openshift-console

kubelet

downloads-955b69498-crzjg

Unhealthy

Readiness probe failed: Get "http://10.128.0.86:8080/": dial tcp 10.128.0.86:8080: connect: connection refused
(x3)

openshift-console

kubelet

downloads-955b69498-crzjg

ProbeError

Readiness probe error: Get "http://10.128.0.86:8080/": dial tcp 10.128.0.86:8080: connect: connection refused body:

openshift-console

kubelet

downloads-955b69498-crzjg

Unhealthy

Liveness probe failed: Get "http://10.128.0.86:8080/": dial tcp 10.128.0.86:8080: connect: connection refused

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"
(x2)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.239.24:443/healthz\": dial tcp 172.30.239.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "All is well"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"}] to [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"} {"oauth-openshift" "4.18.33_openshift"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.33_openshift"

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller

etcd-operator

EtcdCertSignerControllerUpdatingStatus

Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-syncer

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/build.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/image.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationGracefulTerminationFinished

All pending requests processed

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/project.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_3cc6bbe6-f9f9-4666-85ae-a925b6d4b08e became leader
(x10)

openshift-console

kubelet

console-5b6cfdbd-5qbf5

Unhealthy

Startup probe failed: Get "https://10.128.0.87:8443/health": dial tcp 10.128.0.87:8443: connect: connection refused
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused
(x11)

openshift-console

kubelet

console-5b6cfdbd-5qbf5

ProbeError

Startup probe error: Get "https://10.128.0.87:8443/health": dial tcp 10.128.0.87:8443: connect: connection refused body:
(x11)

openshift-console

kubelet

console-67bcb9df49-d2cv6

ProbeError

Startup probe error: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused body:

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor
(x11)

openshift-console

kubelet

console-67bcb9df49-d2cv6

Unhealthy

Startup probe failed: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-7fmeibjvdhibm -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-console namespace

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"497c5735-b1e3-47ab-aac8-5149fd7cf3d5\", ResourceVersion:\"16575\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 5, 8, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 5, 37, 19, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001a010c8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-8csin008gjsd0 -n openshift-monitoring because it was missing

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_21d31b39-8bc1-4fa4-a0c5-86671b9f4bf5 became leader

openshift-monitoring

replicaset-controller

metrics-server-7bf9b765b9

SuccessfulCreate

Created pod: metrics-server-7bf9b765b9-b9fxz

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-d588d74dc to 1

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-7bf9b765b9 to 1

openshift-monitoring

replicaset-controller

metrics-server-65cdf565cd

SuccessfulDelete

Deleted pod: metrics-server-65cdf565cd-555rj

openshift-monitoring

kubelet

metrics-server-65cdf565cd-555rj

Killing

Stopping container metrics-server

openshift-monitoring

replicaset-controller

thanos-querier-d588d74dc

SuccessfulCreate

Created pod: thanos-querier-d588d74dc-gmlm4

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5b6cfdbd to 0 from 1

openshift-console

replicaset-controller

console-5b6cfdbd

SuccessfulDelete

Deleted pod: console-5b6cfdbd-5qbf5

openshift-network-console

replicaset-controller

networking-console-plugin-79f587d78f

SuccessfulCreate

Created pod: networking-console-plugin-79f587d78f-bctpb

openshift-network-console

deployment-controller

networking-console-plugin

ScalingReplicaSet

Scaled up replica set networking-console-plugin-79f587d78f to 1

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled down replica set metrics-server-65cdf565cd to 0 from 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-fkuahuqkfbhtv -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1"

openshift-monitoring

multus

thanos-querier-d588d74dc-gmlm4

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-7bf9b765b9-b9fxz

Created

Created container: metrics-server

openshift-monitoring

multus

metrics-server-7bf9b765b9-b9fxz

AddedInterface

Add eth0 [10.128.0.94/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-7bf9b765b9-b9fxz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" already present on machine

openshift-monitoring

kubelet

metrics-server-7bf9b765b9-b9fxz

Started

Started container metrics-server

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Started

Started container thanos-query

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" in 2.369s (2.369s including waiting). Image size: 502604403 bytes.

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229"

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" in 985ms (985ms including waiting). Image size: 412998070 bytes.

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Created

Created container: kube-rbac-proxy-rules

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Started

Started container kube-rbac-proxy-rules

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Started

Started container kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-d588d74dc-gmlm4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-image-registry

daemonset-controller

node-ca

SuccessfulCreate

Created pod: node-ca-flsqf

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

DaemonSetCreated

Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing

openshift-image-registry

kubelet

node-ca-flsqf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:016ce2c441bfe2106222cd1285f2db09e8cf3712396d4bfbb52fdacb832aa1da"

openshift-image-registry

kubelet

node-ca-flsqf

Started

Started container node-ca

openshift-image-registry

kubelet

node-ca-flsqf

Created

Created container: node-ca

openshift-image-registry

kubelet

node-ca-flsqf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:016ce2c441bfe2106222cd1285f2db09e8cf3712396d4bfbb52fdacb832aa1da" in 2.01s (2.01s including waiting). Image size: 481536115 bytes.

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/downloads\": dial tcp 172.30.0.1:443: connect: connection refused",Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "DownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
(x6)

openshift-monitoring

kubelet

prometheus-k8s-0

FailedMount

MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" : secret "prometheus-k8s-thanos-sidecar-tls" not found
(x6)

openshift-monitoring

kubelet

prometheus-k8s-0

FailedMount

MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : secret "prometheus-k8s-tls" not found
(x7)

openshift-monitoring

kubelet

alertmanager-main-0

FailedMount

MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : secret "alertmanager-main-tls" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused"
(x8)

openshift-network-console

kubelet

networking-console-plugin-79f587d78f-bctpb

FailedMount

MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found

openshift-console

replicaset-controller

console-7875b98987

SuccessfulCreate

Created pod: console-7875b98987-bmnll

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7875b98987 to 1

openshift-console

kubelet

console-7875b98987-bmnll

Created

Created container: console

openshift-console

kubelet

console-7875b98987-bmnll

Started

Started container console

openshift-console

kubelet

console-7875b98987-bmnll

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-console

multus

console-7875b98987-bmnll

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-console

replicaset-controller

console-67bcb9df49

SuccessfulDelete

Deleted pod: console-67bcb9df49-d2cv6

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92"

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92"

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-67bcb9df49 to 0 from 1

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-console

replicaset-controller

console-6f64db7f86

SuccessfulCreate

Created pod: console-6f64db7f86-6brp5

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6f64db7f86 to 1

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-console

multus

console-6f64db7f86-6brp5

AddedInterface

Add eth0 [10.128.0.100/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" in 3.321s (3.321s including waiting). Image size: 605597321 bytes.

openshift-console

kubelet

console-6f64db7f86-6brp5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-console

kubelet

console-6f64db7f86-6brp5

Created

Created container: console

openshift-console

kubelet

console-6f64db7f86-6brp5

Started

Started container console

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" in 3.321s (3.321s including waiting). Image size: 605597321 bytes.

openshift-console

replicaset-controller

console-7875b98987

SuccessfulDelete

Deleted pod: console-7875b98987-bmnll

openshift-console

kubelet

console-7875b98987-bmnll

Killing

Stopping container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-7875b98987 to 0 from 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553"

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553"

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" in 1.563s (1.563s including waiting). Image size: 467433909 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" in 1.563s (1.563s including waiting). Image size: 467433909 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-network-console

multus

networking-console-plugin-79f587d78f-bctpb

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-network-console

kubelet

networking-console-plugin-79f587d78f-bctpb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbffd1dbbfea8326edd5142aaed93290359c152c805239f2ffc77a21b6648490"

openshift-network-console

kubelet

networking-console-plugin-79f587d78f-bctpb

Started

Started container networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-79f587d78f-bctpb

Created

Created container: networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-79f587d78f-bctpb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbffd1dbbfea8326edd5142aaed93290359c152c805239f2ffc77a21b6648490" in 1.403s (1.403s including waiting). Image size: 446757716 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6d5c5b46fd to 1

openshift-console

replicaset-controller

console-6d5c5b46fd

SuccessfulCreate

Created pod: console-6d5c5b46fd-qr4b5

openshift-console

kubelet

console-6d5c5b46fd-qr4b5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-console

multus

console-6d5c5b46fd-qr4b5

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-console

kubelet

console-6d5c5b46fd-qr4b5

Created

Created container: console

openshift-console

kubelet

console-6d5c5b46fd-qr4b5

Started

Started container console

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6f64db7f86 to 0 from 1

openshift-console

replicaset-controller

console-6f64db7f86

SuccessfulDelete

Deleted pod: console-6f64db7f86-6brp5

openshift-console

kubelet

console-6f64db7f86-6brp5

Killing

Stopping container console

openshift-console

replicaset-controller

console-576fb8b7f5

SuccessfulCreate

Created pod: console-576fb8b7f5-srlps

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-576fb8b7f5 to 1

openshift-console

kubelet

console-576fb8b7f5-srlps

Started

Started container console

openshift-console

kubelet

console-576fb8b7f5-srlps

Created

Created container: console

openshift-console

multus

console-576fb8b7f5-srlps

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-console

kubelet

console-576fb8b7f5-srlps

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_1b0a04ca-9fdc-4e93-b0fe-154d824e9231 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_60ddc269-7cd6-4b97-9867-9667ff0a1b3e became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 4 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_6cb27ad5-ce9f-4a5e-aaf2-55574c5b374e became leader

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-rjgth

Pulling

Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490"

openshift-console

replicaset-controller

console-6d5c5b46fd

SuccessfulDelete

Deleted pod: console-6d5c5b46fd-qr4b5

sushy-emulator

multus

sushy-emulator-78f6d7d749-rjgth

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

sushy-emulator

replicaset-controller

sushy-emulator-78f6d7d749

… | … | … | SuccessfulCreate | Created pod: sushy-emulator-78f6d7d749-rjgth
openshift-console | kubelet | console-6d5c5b46fd-qr4b5 | Killing | Stopping container console
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6d5c5b46fd to 0 from 1
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-78f6d7d749 to 1
sushy-emulator | kubelet | sushy-emulator-78f6d7d749-rjgth | Created | Created container: sushy-emulator
sushy-emulator | kubelet | sushy-emulator-78f6d7d749-rjgth | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" in 6.651s (6.651s including waiting). Image size: 325685589 bytes.
sushy-emulator | kubelet | sushy-emulator-78f6d7d749-rjgth | Started | Started container sushy-emulator
sushy-emulator | replicaset-controller | nova-console-poller-5bbdbdc4dc | SuccessfulCreate | Created pod: nova-console-poller-5bbdbdc4dc-t2lxm
sushy-emulator | deployment-controller | nova-console-poller | ScalingReplicaSet | Scaled up replica set nova-console-poller-5bbdbdc4dc to 1
sushy-emulator | multus | nova-console-poller-5bbdbdc4dc-t2lxm | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Started | Started container console-poller-8f435b47-90f3-4d07-864b-e312b54597e5
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Created | Created container: console-poller-8f435b47-90f3-4d07-864b-e312b54597e5
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 5.291s (5.291s including waiting). Image size: 202633582 bytes.
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Created | Created container: console-poller-35a4f56f-0126-4647-89c0-33250eeb2549
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 400ms (400ms including waiting). Image size: 202633582 bytes.
sushy-emulator | kubelet | nova-console-poller-5bbdbdc4dc-t2lxm | Started | Started container console-poller-35a4f56f-0126-4647-89c0-33250eeb2549
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531865 | SuccessfulCreate | Created pod: collect-profiles-29531865-5wmht
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29531865
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531865-5wmht | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531865-5wmht | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531865-5wmht | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | multus | collect-profiles-29531865-5wmht | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes
sushy-emulator | replicaset-controller | nova-console-recorder-7b97cdbf9f | SuccessfulCreate | Created pod: nova-console-recorder-7b97cdbf9f-vzh2n
sushy-emulator | deployment-controller | nova-console-recorder | ScalingReplicaSet | Scaled up replica set nova-console-recorder-7b97cdbf9f to 1
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29531865, condition: Complete
sushy-emulator | multus | nova-console-recorder-7b97cdbf9f-vzh2n | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531865 | Completed | Job completed
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Created | Created container: console-recorder-8f435b47-90f3-4d07-864b-e312b54597e5
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 7.598s (7.598s including waiting). Image size: 664134874 bytes.
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Started | Started container console-recorder-8f435b47-90f3-4d07-864b-e312b54597e5
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Started | Started container console-recorder-35a4f56f-0126-4647-89c0-33250eeb2549
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Created | Created container: console-recorder-35a4f56f-0126-4647-89c0-33250eeb2549
sushy-emulator | kubelet | nova-console-recorder-7b97cdbf9f-vzh2n | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 514ms (514ms including waiting). Image size: 664134874 bytes.
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-storage namespace
openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | SuccessfulCreate | Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp
openshift-marketplace | multus | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Created | Created container: util
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Started | Started container util
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 2.029s (2.029s including waiting). Image size: 108204 bytes.
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Started | Started container pull
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Created | Created container: pull
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Created | Created container: extract
openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp | Started | Started container extract
openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | Completed | Job completed
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.
openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-7fd9747c7b to 1
openshift-storage | replicaset-controller | lvms-operator-7fd9747c7b | SuccessfulCreate | Created pod: lvms-operator-7fd9747c7b-h8dsz (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install
openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-7fd9747c7b to 1 (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy
openshift-storage | replicaset-controller | lvms-operator-7fd9747c7b | SuccessfulCreate | Created pod: lvms-operator-7fd9747c7b-h8dsz (x2)
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install
openshift-storage | multus | lvms-operator-7fd9747c7b-h8dsz | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes
openshift-storage | multus | lvms-operator-7fd9747c7b-h8dsz | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Created | Created container: manager
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Started | Started container manager
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Started | Started container manager
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.867s (4.867s including waiting). Image size: 238305644 bytes.
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Created | Created container: manager
openshift-storage | kubelet | lvms-operator-7fd9747c7b-h8dsz | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.867s (4.867s including waiting). Image size: 238305644 bytes.
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | install strategy completed with no errors
openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | install strategy completed with no errors
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nmstate namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for metallb-system namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace
openshift-marketplace | job-controller | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 | SuccessfulCreate | Created pod: a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6
openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | SuccessfulCreate | Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Created | Created container: util
openshift-marketplace | job-controller | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 | SuccessfulCreate | Created pod: f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Created | Created container: util
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Started | Started container util
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-marketplace | multus | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Started | Started container util
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-marketplace | multus | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | AddedInterface | Add eth0 [10.128.0.110/23] from ovn-kubernetes
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Pulling | Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908"
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1"
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Started | Started container util
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Created | Created container: util
openshift-marketplace | multus | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Started | Started container pull
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" in 1.167s (1.167s including waiting). Image size: 329517 bytes.
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf"
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Created | Created container: pull
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 2.996s (2.996s including waiting). Image size: 108352841 bytes.
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Started | Started container extract
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" in 2.212s (2.212s including waiting). Image size: 176636 bytes.
openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6 | Created | Created container: extract
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Created | Created container: pull
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Started | Started container pull
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Created | Created container: pull
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Started | Started container pull
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Created | Created container: extract
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Created | Created container: extract
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine
openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f | Started | Started container extract
openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h | Started | Started container extract
openshift-marketplace | job-controller | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b | SuccessfulCreate | Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Created | Created container: util
openshift-marketplace | multus | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes
openshift-marketplace | job-controller | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 | Completed | Job completed
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Started | Started container util
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-marketplace | job-controller | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 | Completed | Job completed
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6"
openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | Completed | Job completed
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsUnknown | requirements not yet checked
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsUnknown | requirements not yet checked
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Created | Created container: pull
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsNotMet | one or more requirements couldn't be found
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Created | Created container: extract
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Started | Started container extract
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Started | Started container pull
openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.606s (1.606s including waiting). Image size: 4900233 bytes.
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsNotMet | one or more requirements couldn't be found
openshift-marketplace | job-controller | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b | Completed | Job completed
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | RequirementsUnknown | requirements not yet checked
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | RequirementsNotMet | one or more requirements couldn't be found
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | RequirementsUnknown | requirements not yet checked
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | RequirementsNotMet | one or more requirements couldn't be found
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | InstallSucceeded | waiting for install components to report healthy
openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-694c9596b7 to 1
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | AllRequirementsMet | all requirements found, attempting install
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | AllRequirementsMet | all requirements found, attempting install
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | InstallSucceeded | waiting for install components to report healthy
openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-694c9596b7 to 1
openshift-nmstate | replicaset-controller | nmstate-operator-694c9596b7 | SuccessfulCreate | Created pod: nmstate-operator-694c9596b7-8jfxc
openshift-nmstate | replicaset-controller | nmstate-operator-694c9596b7 | SuccessfulCreate | Created pod: nmstate-operator-694c9596b7-8jfxc
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.
openshift-nmstate | multus | nmstate-operator-694c9596b7-8jfxc | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce"
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce"
openshift-nmstate | multus | nmstate-operator-694c9596b7-8jfxc | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager namespace
cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-545d4d4674 to 1
default | cert-manager-istio-csr-controller | | ControllerStarted | controller is starting
cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-545d4d4674 to 1
metallb-system | replicaset-controller | metallb-operator-webhook-server-f5b8c49d9 | SuccessfulCreate | Created pod: metallb-operator-webhook-server-f5b8c49d9-w75vs
metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-688bdcdc8c to 1
metallb-system | replicaset-controller | metallb-operator-controller-manager-688bdcdc8c | SuccessfulCreate | Created pod: metallb-operator-controller-manager-688bdcdc8c-4mpqv
cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-5545bd876 to 1
metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-688bdcdc8c to 1
metallb-system | replicaset-controller | metallb-operator-webhook-server-f5b8c49d9 | SuccessfulCreate | Created pod: metallb-operator-webhook-server-f5b8c49d9-w75vs
metallb-system | replicaset-controller | metallb-operator-controller-manager-688bdcdc8c | SuccessfulCreate | Created pod: metallb-operator-controller-manager-688bdcdc8c-4mpqv
metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-f5b8c49d9 to 1
cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-5545bd876 to 1
metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-f5b8c49d9 to 1
metallb-system | kubelet | metallb-operator-webhook-server-f5b8c49d9-w75vs | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e"
metallb-system | multus | metallb-operator-webhook-server-f5b8c49d9-w75vs | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854"
metallb-system | multus | metallb-operator-controller-manager-688bdcdc8c-4mpqv | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes (x9)
cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | FailedCreate | Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found
metallb-system | multus | metallb-operator-webhook-server-f5b8c49d9-w75vs | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | RequirementsUnknown | requirements not yet checked
cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | SuccessfulCreate | Created pod: cert-manager-webhook-6888856db4-pxvzq
metallb-system | multus | metallb-operator-controller-manager-688bdcdc8c-4mpqv | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | RequirementsUnknown | requirements not yet checked
cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-6888856db4 to 1
cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-6888856db4 to 1
metallb-system | kubelet | metallb-operator-webhook-server-f5b8c49d9-w75vs | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" (x9)
cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | FailedCreate | Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found
cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | SuccessfulCreate | Created pod: cert-manager-webhook-6888856db4-pxvzq
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854"
cert-manager | multus | cert-manager-webhook-6888856db4-pxvzq | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes
cert-manager | kubelet | cert-manager-webhook-6888856db4-pxvzq | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"
cert-manager | kubelet | cert-manager-webhook-6888856db4-pxvzq | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"
cert-manager | multus | cert-manager-webhook-6888856db4-pxvzq | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes
cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | SuccessfulCreate | Created pod: cert-manager-cainjector-5545bd876-vhdf8
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | RequirementsNotMet | one or more requirements couldn't be found
cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | SuccessfulCreate | Created pod: cert-manager-cainjector-5545bd876-vhdf8
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | RequirementsNotMet | one or more requirements couldn't be found (x2)
openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found (x2)
openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found
cert-manager | multus | cert-manager-cainjector-5545bd876-vhdf8 | AddedInterface | Add eth0 [10.128.0.119/23] from ovn-kubernetes
cert-manager | kubelet | cert-manager-cainjector-5545bd876-vhdf8 | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"
cert-manager | kubelet | cert-manager-cainjector-5545bd876-vhdf8 | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"
cert-manager | multus | cert-manager-cainjector-5545bd876-vhdf8 | AddedInterface | Add eth0 [10.128.0.119/23] from ovn-kubernetes
metallb-system | metallb-operator-controller-manager-688bdcdc8c-4mpqv_ea3b7fb7-31df-4584-824e-7e7c4691a656 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-688bdcdc8c-4mpqv_ea3b7fb7-31df-4584-824e-7e7c4691a656 became leader
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Created | Created container: manager
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Created | Created container: manager
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Started | Started container manager
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Created | Created container: nmstate-operator
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 9.764s (9.764s including waiting). Image size: 451308023 bytes.
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 5.773s (5.773s including waiting). Image size: 462337664 bytes.
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Started | Started container nmstate-operator
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 5.773s (5.773s including waiting). Image size: 462337664 bytes.
metallb-system | metallb-operator-controller-manager-688bdcdc8c-4mpqv_ea3b7fb7-31df-4584-824e-7e7c4691a656 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-688bdcdc8c-4mpqv_ea3b7fb7-31df-4584-824e-7e7c4691a656 became leader
metallb-system | kubelet | metallb-operator-controller-manager-688bdcdc8c-4mpqv | Started | Started container manager
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 9.764s (9.764s including waiting). Image size: 451308023 bytes.
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Created | Created container: nmstate-operator
openshift-nmstate | kubelet | nmstate-operator-694c9596b7-8jfxc | Started | Started container nmstate-operator
openshift-nmstate | operator-lifecycle-manager | …

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

install strategy completed with no errors

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

install strategy completed with no errors

metallb-system

operator-lifecycle-manager

install-fp62x

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2

metallb-system

operator-lifecycle-manager

install-fp62x

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

NeedsReinstall

calculated deployment install is bad

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

NeedsReinstall

calculated deployment install is bad

cert-manager

kubelet

cert-manager-webhook-6888856db4-pxvzq

Started

Started container cert-manager-webhook
(x12)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-gmzdr

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vhdf8

Started

Started container cert-manager-cainjector

cert-manager

kubelet

cert-manager-webhook-6888856db4-pxvzq

Created

Created container: cert-manager-webhook

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1

cert-manager

kubelet

cert-manager-webhook-6888856db4-pxvzq

Started

Started container cert-manager-webhook
(x12)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-gmzdr

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vhdf8

Created

Created container: cert-manager-cainjector

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

AllRequirementsMet

all requirements found, attempting install

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vhdf8

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 6.118s (6.118s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vhdf8

Created

Created container: cert-manager-cainjector

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vhdf8

Started

Started container cert-manager-cainjector
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

AllRequirementsMet

all requirements found, attempting install

cert-manager

kubelet

cert-manager-cainjector-5545bd876-vhdf8

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 6.118s (6.118s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-webhook-6888856db4-pxvzq

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 7.807s (7.807s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-webhook-6888856db4-pxvzq

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-webhook-6888856db4-pxvzq

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 7.807s (7.807s including waiting). Image size: 319887149 bytes.

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-f46855c6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-gmzdr

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-tgbdb

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-jbdsj

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-f46855c6 to 2

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-f46855c6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-f46855c6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-f46855c6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-f46855c6 to 2

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-tgbdb

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-gmzdr

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-jbdsj

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

kube-system

cert-manager-cainjector-5545bd876-vhdf8_03aa3ae9-0d78-425b-b7e6-1138fa070a4d

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-5545bd876-vhdf8_03aa3ae9-0d78-425b-b7e6-1138fa070a4d became leader

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

waiting for install components to report healthy

openshift-operators

multus

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

observability-operator-59bdc8b94-tgbdb

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

multus

perses-operator-5bf474d74f-jbdsj

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

observability-operator-59bdc8b94-tgbdb

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

openshift-operators

multus

perses-operator-5bf474d74f-jbdsj

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

waiting for install components to report healthy

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-ss7w9

metallb-system

kubelet

metallb-operator-webhook-server-f5b8c49d9-w75vs

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 18.743s (18.743s including waiting). Image size: 554925471 bytes.

metallb-system

kubelet

metallb-operator-webhook-server-f5b8c49d9-w75vs

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 18.743s (18.743s including waiting). Image size: 554925471 bytes.

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-ss7w9

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 11.637s (11.637s including waiting). Image size: 399540002 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 11.637s (11.637s including waiting). Image size: 399540002 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.508s (11.508s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.508s (11.508s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.735s (11.735s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 11.857s (11.857s including waiting). Image size: 199215153 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 11.857s (11.857s including waiting). Image size: 199215153 bytes.

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 11.112s (11.112s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 11.112s (11.112s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.735s (11.735s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Created

Created container: prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Started

Started container prometheus-operator-admission-webhook

cert-manager

kubelet

cert-manager-545d4d4674-ss7w9

Started

Started container cert-manager-controller

cert-manager

kubelet

cert-manager-545d4d4674-ss7w9

Created

Created container: cert-manager-controller

cert-manager

kubelet

cert-manager-545d4d4674-ss7w9

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Started

Started container perses-operator

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Created

Created container: perses-operator

cert-manager

multus

cert-manager-545d4d4674-ss7w9

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Created

Created container: perses-operator

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Started

Started container operator

metallb-system

kubelet

metallb-operator-webhook-server-f5b8c49d9-w75vs

Created

Created container: webhook-server

metallb-system

kubelet

metallb-operator-webhook-server-f5b8c49d9-w75vs

Started

Started container webhook-server

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Created

Created container: operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Started

Started container prometheus-operator

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Started

Started container operator

openshift-operators

kubelet

observability-operator-59bdc8b94-tgbdb

Created

Created container: operator

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-545d4d4674-ss7w9-external-cert-manager-controller became leader

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-qm7sz

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-f46855c6-pq8bs

Created

Created container: prometheus-operator-admission-webhook

metallb-system

kubelet

metallb-operator-webhook-server-f5b8c49d9-w75vs

Started

Started container webhook-server

metallb-system

kubelet

metallb-operator-webhook-server-f5b8c49d9-w75vs

Created

Created container: webhook-server

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Started

Started container prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-gmzdr

Created

Created container: prometheus-operator

cert-manager

kubelet

cert-manager-545d4d4674-ss7w9

Created

Created container: cert-manager-controller

cert-manager

multus

cert-manager-545d4d4674-ss7w9

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

openshift-operators

kubelet

perses-operator-5bf474d74f-jbdsj

Started

Started container perses-operator

cert-manager

kubelet

cert-manager-545d4d4674-ss7w9

Started

Started container cert-manager-controller

cert-manager

kubelet

cert-manager-545d4d4674-ss7w9

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

install strategy completed with no errors

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

install strategy completed with no errors

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

install strategy completed with no errors

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

install strategy completed with no errors

metallb-system

replicaset-controller

frr-k8s-webhook-server-78b44bf5bb

SuccessfulCreate

Created pod: frr-k8s-webhook-server-78b44bf5bb-9rc2g

metallb-system

replicaset-controller

controller-69bbfbf88f

SuccessfulCreate

Created pod: controller-69bbfbf88f-hnk7l

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-fsm64

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-fsm64

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1

metallb-system

replicaset-controller

controller-69bbfbf88f

SuccessfulCreate

Created pod: controller-69bbfbf88f-hnk7l

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-69bbfbf88f to 1

metallb-system

replicaset-controller

frr-k8s-webhook-server-78b44bf5bb

SuccessfulCreate

Created pod: frr-k8s-webhook-server-78b44bf5bb-9rc2g

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-tds5c

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-tds5c

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 0e0a9ec9-4bef-44a3-aed8-ea5c7ef8bf40] does not exist in namespace ""

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-69bbfbf88f to 1

metallb-system

kubelet

frr-k8s-fsm64

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

multus

frr-k8s-webhook-server-78b44bf5bb-9rc2g

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-9rc2g

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-9rc2g

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

kubelet

frr-k8s-fsm64

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "controller-certs-secret" not found

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "controller-certs-secret" not found

metallb-system

multus

frr-k8s-webhook-server-78b44bf5bb-9rc2g

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Created

Created container: controller

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-866bcb46dc to 1

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Started

Started container controller

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Started

Started container controller

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Created

Created container: controller

metallb-system

kubelet

controller-69bbfbf88f-hnk7l

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine
(x3)

metallb-system

kubelet

speaker-tds5c

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

multus

controller-69bbfbf88f-hnk7l

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-r6rsr

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-866bcb46dc to 1

openshift-nmstate

replicaset-controller

nmstate-webhook-866bcb46dc

SuccessfulCreate

Created pod: nmstate-webhook-866bcb46dc-qp4cm

metallb-system

multus

controller-69bbfbf88f-hnk7l

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-58c85c668d to 1
openshift-nmstate | replicaset-controller | nmstate-metrics-58c85c668d | SuccessfulCreate | Created pod: nmstate-metrics-58c85c668d-c85cm
openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-r6rsr
openshift-nmstate | replicaset-controller | nmstate-webhook-866bcb46dc | SuccessfulCreate | Created pod: nmstate-webhook-866bcb46dc-qp4cm
metallb-system | kubelet | speaker-tds5c | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found (x3)
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-qp4cm | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"
openshift-nmstate | multus | nmstate-console-plugin-5c78fc5d65-447df | AddedInterface | Add eth0 [10.128.0.130/23] from ovn-kubernetes
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml (x5)
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") (x5)
openshift-nmstate | multus | nmstate-metrics-58c85c668d-c85cm | AddedInterface | Add eth0 [10.128.0.128/23] from ovn-kubernetes
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-c85cm | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 1 replicas available" (x5)
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]
openshift-console | replicaset-controller | console-d54bc7dc7 | SuccessfulCreate | Created pod: console-d54bc7dc7-5mlqz
metallb-system | kubelet | controller-69bbfbf88f-hnk7l | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 1.28s (1.28s including waiting). Image size: 464984427 bytes.
openshift-nmstate | replicaset-controller | nmstate-console-plugin-5c78fc5d65 | SuccessfulCreate | Created pod: nmstate-console-plugin-5c78fc5d65-447df
openshift-nmstate | kubelet | nmstate-handler-r6rsr | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"
openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1
openshift-nmstate | multus | nmstate-webhook-866bcb46dc-qp4cm | AddedInterface | Add eth0 [10.128.0.129/23] from ovn-kubernetes
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed (x11)
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-d54bc7dc7 to 1

metallb-system | kubelet | controller-69bbfbf88f-hnk7l | Created | Created container: kube-rbac-proxy
metallb-system | kubelet | controller-69bbfbf88f-hnk7l | Started | Started container kube-rbac-proxy
openshift-console | kubelet | console-d54bc7dc7-5mlqz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine
openshift-console | kubelet | console-d54bc7dc7-5mlqz | Started | Started container console
openshift-console | multus | console-d54bc7dc7-5mlqz | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes
metallb-system | kubelet | speaker-tds5c | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine
openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-447df | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078"
openshift-console | kubelet | console-d54bc7dc7-5mlqz | Created | Created container: console
metallb-system | kubelet | speaker-tds5c | Started | Started container kube-rbac-proxy
metallb-system | kubelet | speaker-tds5c | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine
metallb-system | kubelet | speaker-tds5c | Started | Started container speaker
metallb-system | kubelet | speaker-tds5c | Created | Created container: speaker
metallb-system | kubelet | speaker-tds5c | Created | Created container: kube-rbac-proxy
metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-9rc2g | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 8.1s (8.1s including waiting). Image size: 662037039 bytes.
openshift-nmstate | kubelet | nmstate-handler-r6rsr | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.467s (6.467s including waiting). Image size: 498436272 bytes.
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-qp4cm | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.895s (5.895s including waiting). Image size: 498436272 bytes.
metallb-system | kubelet | frr-k8s-fsm64 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 8.45s (8.45s including waiting). Image size: 662037039 bytes.
openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-447df | Pulled | Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 5.853s (5.853s including waiting). Image size: 453642085 bytes.
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-c85cm | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.015s (6.015s including waiting). Image size: 498436272 bytes.
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-qp4cm | Created | Created container: nmstate-webhook
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-qp4cm | Started | Started container nmstate-webhook
metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-9rc2g | Started | Started container frr-k8s-webhook-server
metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-9rc2g | Created | Created container: frr-k8s-webhook-server
openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-447df | Started | Started container nmstate-console-plugin
openshift-nmstate | kubelet | nmstate-handler-r6rsr | Created | Created container: nmstate-handler
openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-447df | Created | Created container: nmstate-console-plugin
openshift-nmstate | kubelet | nmstate-handler-r6rsr | Started | Started container nmstate-handler
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-c85cm | Created | Created container: nmstate-metrics
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-c85cm | Started | Started container nmstate-metrics
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-c85cm | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-c85cm | Created | Created container: kube-rbac-proxy
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-c85cm | Started | Started container kube-rbac-proxy

metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container cp-reloader
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: cp-reloader
metallb-system | kubelet | frr-k8s-fsm64 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: cp-frr-files
metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container cp-frr-files
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: cp-metrics
metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container cp-metrics
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: controller
metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container controller
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: frr
metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container frr
metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container reloader
metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container kube-rbac-proxy
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: frr-metrics
metallb-system | kubelet | frr-k8s-fsm64 | Started | Started container frr-metrics
metallb-system | kubelet | frr-k8s-fsm64 | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: reloader
metallb-system | kubelet | frr-k8s-fsm64 | Created | Created container: kube-rbac-proxy
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-576fb8b7f5 to 0 from 1
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.33, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 2 replicas available" (x2)
openshift-console | kubelet | console-576fb8b7f5-srlps | Killing | Stopping container console
openshift-console | replicaset-controller | console-576fb8b7f5 | SuccessfulDelete | Deleted pod: console-576fb8b7f5-srlps
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") (x5)
openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-r5t2w
openshift-storage | multus | vg-manager-r5t2w | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes
openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io (x12)
openshift-storage | kubelet | vg-manager-r5t2w | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine (x2)
openshift-storage | kubelet | vg-manager-r5t2w | Created | Created container: vg-manager (x2)
openshift-storage | kubelet | vg-manager-r5t2w | Started | Started container vg-manager (x2)
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace
openstack-operators | kubelet | openstack-operator-index-tptx6 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"
openstack-operators | multus | openstack-operator-index-tptx6 | AddedInterface | Add eth0 [10.128.0.133/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-index-tptx6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 894ms (894ms including waiting). Image size: 918506145 bytes.
openstack-operators | kubelet | openstack-operator-index-tptx6 | Created | Created container: registry-server
openstack-operators | kubelet | openstack-operator-index-tptx6 | Started | Started container registry-server
default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index (x9)
openstack-operators | kubelet | openstack-operator-index-tptx6 | Killing | Stopping container registry-server
openshift-console | kubelet | console-576fb8b7f5-srlps | Unhealthy | Readiness probe failed: Get "https://10.128.0.103:8443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openshift-console | kubelet | console-576fb8b7f5-srlps | ProbeError | Readiness probe error: Get "https://10.128.0.103:8443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
openstack-operators | kubelet | openstack-operator-index-2pkfs | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"
openstack-operators | multus | openstack-operator-index-2pkfs | AddedInterface | Add eth0 [10.128.0.134/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-index-2pkfs | Started | Started container registry-server
openstack-operators | kubelet | openstack-operator-index-2pkfs | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 670ms (670ms including waiting). Image size: 918506145 bytes.
openstack-operators | kubelet | openstack-operator-index-2pkfs | Created | Created container: registry-server
default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.240.58:50051: connect: connection refused"
openstack-operators | job-controller | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda149afda | SuccessfulCreate | Created pod: 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Created | Created container: util
openstack-operators | multus | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | AddedInterface | Add eth0 [10.128.0.135/23] from ovn-kubernetes
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Started | Started container util
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:5de87989637b6d22555d7bde45e2a2d14c6ec08d" in 745ms (745ms including waiting). Image size: 115772 bytes.
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:5de87989637b6d22555d7bde45e2a2d14c6ec08d"
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Started | Started container extract
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Created | Created container: extract
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Created | Created container: pull
openstack-operators | kubelet | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz | Started | Started container pull
openstack-operators | job-controller | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda149afda | Completed | Job completed

openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | AllRequirementsMet | all requirements found, attempting install
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability.
openstack-operators | deployment-controller | openstack-operator-controller-init | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-init-55c649df44 to 1
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsUnknown | requirements not yet checked
openstack-operators | replicaset-controller | openstack-operator-controller-init-55c649df44 | SuccessfulCreate | Created pod: openstack-operator-controller-init-55c649df44-8xq4x
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsNotMet | one or more requirements couldn't be found
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | waiting for install components to report healthy

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785"

openstack-operators

multus

openstack-operator-controller-init-55c649df44-8xq4x

AddedInterface

Add eth0 [10.128.0.136/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785"

openstack-operators

multus

openstack-operator-controller-init-55c649df44-8xq4x

AddedInterface

Add eth0 [10.128.0.136/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Started

Started container operator

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785" in 4.968s (4.968s including waiting). Image size: 293229892 bytes.

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785" in 4.968s (4.968s including waiting). Image size: 293229892 bytes.

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-8xq4x

Started

Started container operator

openstack-operators

openstack-operator-controller-init-55c649df44-8xq4x_58c35375-bfe5-41bf-bc19-07dad8ada124

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-init-55c649df44-8xq4x_58c35375-bfe5-41bf-bc19-07dad8ada124 became leader

openstack-operators

openstack-operator-controller-init-55c649df44-8xq4x_58c35375-bfe5-41bf-bc19-07dad8ada124

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-init-55c649df44-8xq4x_58c35375-bfe5-41bf-bc19-07dad8ada124 became leader

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

install strategy completed with no errors

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

install strategy completed with no errors

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-jn5cq"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

barbican-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

cinder-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-5rbtd"

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

barbican-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-jn5cq"

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

cinder-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-5rbtd"

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-vkgmm"

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

barbican-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-vkgmm"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

barbican-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-fbrqm"

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-fbrqm"

openstack-operators

cert-manager-certificates-trigger

infra-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

infra-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

infra-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "infra-operator-metrics-certs-ptvkl"

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-lxggx"

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-5c59m"

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

barbican-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-j8pq2"

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-j8pq2"

openstack-operators

cert-manager-certificates-issuing

barbican-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

infra-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "infra-operator-metrics-certs-ptvkl"

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-5c59m"

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-lxggx"

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

designate-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

designate-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

barbican-operator-controller-manager-868647ff47

SuccessfulCreate

Created pod: barbican-operator-controller-manager-868647ff47-rngmn

openstack-operators

replicaset-controller

cinder-operator-controller-manager-55d77d7b5c

SuccessfulCreate

Created pod: cinder-operator-controller-manager-55d77d7b5c-m52ng

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-2nfbw"

openstack-operators

replicaset-controller

glance-operator-controller-manager-784b5bb6c5

SuccessfulCreate

Created pod: glance-operator-controller-manager-784b5bb6c5-zghgv

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-784b5bb6c5 to 1

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

barbican-operator-controller-manager-868647ff47

SuccessfulCreate

Created pod: barbican-operator-controller-manager-868647ff47-rngmn

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-tk5pr"

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-69f49c598c to 1

openstack-operators

replicaset-controller

heat-operator-controller-manager-69f49c598c

SuccessfulCreate

Created pod: heat-operator-controller-manager-69f49c598c-75df9

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

nova-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "nova-operator-metrics-certs-8rhwv"

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-2nfbw"

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-55d77d7b5c to 1

openstack-operators

replicaset-controller

cinder-operator-controller-manager-55d77d7b5c

SuccessfulCreate

Created pod: cinder-operator-controller-manager-55d77d7b5c-m52ng

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-784b5bb6c5 to 1

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

glance-operator-controller-manager-784b5bb6c5

SuccessfulCreate

Created pod: glance-operator-controller-manager-784b5bb6c5-zghgv

openstack-operators

replicaset-controller

heat-operator-controller-manager-69f49c598c

SuccessfulCreate

Created pod: heat-operator-controller-manager-69f49c598c-75df9

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1

openstack-operators

replicaset-controller

designate-operator-controller-manager-6d8bf5c495

SuccessfulCreate

Created pod: designate-operator-controller-manager-6d8bf5c495-vq97j

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-69f49c598c to 1

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1

openstack-operators

replicaset-controller

designate-operator-controller-manager-6d8bf5c495

SuccessfulCreate

Created pod: designate-operator-controller-manager-6d8bf5c495-vq97j

openstack-operators

cert-manager-certificates-key-manager

nova-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "nova-operator-metrics-certs-8rhwv"

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-tk5pr"

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-55d77d7b5c to 1

openstack-operators

replicaset-controller

test-operator-controller-manager-5dc6794d5b

SuccessfulCreate

Created pod: test-operator-controller-manager-5dc6794d5b-96zg4

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-5dc486cffc to 1

openstack-operators

replicaset-controller

openstack-operator-controller-manager-5dc486cffc

SuccessfulCreate

Created pod: openstack-operator-controller-manager-5dc486cffc-rbqzr

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-67d996989d to 1

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-68f46476f to 1

openstack-operators

replicaset-controller

nova-operator-controller-manager-567668f5cf

SuccessfulCreate

Created pod: nova-operator-controller-manager-567668f5cf-sfjt8

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-567668f5cf to 1

openstack-operators

replicaset-controller

swift-operator-controller-manager-68f46476f

SuccessfulCreate

Created pod: swift-operator-controller-manager-68f46476f-tc9k2

openstack-operators

replicaset-controller

manila-operator-controller-manager-67d996989d

SuccessfulCreate

Created pod: manila-operator-controller-manager-67d996989d-qbghx

openstack-operators

replicaset-controller

octavia-operator-controller-manager-659dc6bbfc

SuccessfulCreate

Created pod: octavia-operator-controller-manager-659dc6bbfc-z4h54

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-659dc6bbfc to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-589c568786

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-589c568786-9ljm5

openstack-operators

replicaset-controller

placement-operator-controller-manager-8497b45c89

SuccessfulCreate

Created pod: placement-operator-controller-manager-8497b45c89-8xrtm

openstack-operators

cert-manager-certificates-key-manager

octavia-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-zsvbz"

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-579b7786b9

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-579b7786b9-2xw4j

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-589c568786 to 1

openstack-operators

replicaset-controller

infra-operator-controller-manager-5f879c76b6

SuccessfulCreate

Created pod: infra-operator-controller-manager-5f879c76b6-bv48m

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-6994f66f48

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-6994f66f48-28hdf

openstack-operators

deployment-controller

mariadb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set mariadb-operator-controller-manager-6994f66f48 to 1

openstack-operators

replicaset-controller

keystone-operator-controller-manager-b4d948c87

SuccessfulCreate

Created pod: keystone-operator-controller-manager-b4d948c87-zlj5w

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-668c99d594

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-668c99d594-vrbmh

openstack-operators

cert-manager-certificates-trigger

telemetry-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-579b7786b9 to 1

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-6bd4687957 to 1

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1

openstack-operators

replicaset-controller

ironic-operator-controller-manager-554564d7fc

SuccessfulCreate

Created pod: ironic-operator-controller-manager-554564d7fc-db24j

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-5dc6794d5b to 1

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-bccc79885 to 1

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1

openstack-operators

replicaset-controller

neutron-operator-controller-manager-6bd4687957

SuccessfulCreate

Created pod: neutron-operator-controller-manager-6bd4687957-svmn2

openstack-operators

replicaset-controller

horizon-operator-controller-manager-5b9b8895d5

SuccessfulCreate

Created pod: horizon-operator-controller-manager-5b9b8895d5-gmljt

openstack-operators

replicaset-controller

watcher-operator-controller-manager-bccc79885

SuccessfulCreate

Created pod: watcher-operator-controller-manager-bccc79885-96xg2

openstack-operators

cert-manager-certificaterequests-approver

glance-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

replicaset-controller

ovn-operator-controller-manager-5955d8c787

SuccessfulCreate

Created pod: ovn-operator-controller-manager-5955d8c787-zbd8b

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-5955d8c787 to 1

openstack-operators

cert-manager-certificates-trigger

test-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

heat-operator-controller-manager-69f49c598c-75df9

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2"

openstack-operators

multus

horizon-operator-controller-manager-5b9b8895d5-gmljt

AddedInterface

Add eth0 [10.128.0.142/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-trigger

infra-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

watcher-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

horizon-operator-controller-manager-5b9b8895d5-gmljt

Pulling

Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da"

openstack-operators

multus

barbican-operator-controller-manager-868647ff47-rngmn

AddedInterface

Add eth0 [10.128.0.138/23] from ovn-kubernetes

openstack-operators

kubelet

barbican-operator-controller-manager-868647ff47-rngmn

Pulling

Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc"

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-j94zf"

openstack-operators

multus

heat-operator-controller-manager-69f49c598c-75df9

AddedInterface

Add eth0 [10.128.0.141/23] from ovn-kubernetes

openstack-operators

kubelet

glance-operator-controller-manager-784b5bb6c5-zghgv

Pulling

Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be"

openstack-operators

multus

glance-operator-controller-manager-784b5bb6c5-zghgv

AddedInterface

Add eth0 [10.128.0.140/23] from ovn-kubernetes

openstack-operators

multus

cinder-operator-controller-manager-55d77d7b5c-m52ng

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-m52ng

Pulling

Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3"

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

multus

designate-operator-controller-manager-6d8bf5c495-vq97j

AddedInterface

Add eth0 [10.128.0.139/23] from ovn-kubernetes

openstack-operators

kubelet

designate-operator-controller-manager-6d8bf5c495-vq97j

Pulling

Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642"

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-qbghx

Pulling

Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26"

openstack-operators

kubelet

ironic-operator-controller-manager-554564d7fc-db24j

Pulling

Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867"

openstack-operators

multus

keystone-operator-controller-manager-b4d948c87-zlj5w

AddedInterface

Add eth0 [10.128.0.145/23] from ovn-kubernetes

openstack-operators

kubelet

keystone-operator-controller-manager-b4d948c87-zlj5w

Pulling

Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1"

openstack-operators

multus

swift-operator-controller-manager-68f46476f-tc9k2

AddedInterface

Add eth0 [10.128.0.154/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

nova-operator-controller-manager-567668f5cf-sfjt8

Pulling

Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838"

openstack-operators

multus

nova-operator-controller-manager-567668f5cf-sfjt8

AddedInterface

Add eth0 [10.128.0.149/23] from ovn-kubernetes

openstack-operators

multus

placement-operator-controller-manager-8497b45c89-8xrtm

AddedInterface

Add eth0 [10.128.0.153/23] from ovn-kubernetes

openstack-operators

multus

mariadb-operator-controller-manager-6994f66f48-28hdf

AddedInterface

Add eth0 [10.128.0.147/23] from ovn-kubernetes

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-28hdf

Pulling

Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a"

openstack-operators

multus

watcher-operator-controller-manager-bccc79885-96xg2

AddedInterface

Add eth0 [10.128.0.157/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-zv4kt"

openstack-operators

multus

manila-operator-controller-manager-67d996989d-qbghx

AddedInterface

Add eth0 [10.128.0.146/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

multus

ironic-operator-controller-manager-554564d7fc-db24j

AddedInterface

Add eth0 [10.128.0.144/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

keystone-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

ovn-operator-controller-manager-5955d8c787-zbd8b

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192"

openstack-operators

kubelet

neutron-operator-controller-manager-6bd4687957-svmn2

Pulling

Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf"

openstack-operators

multus

ovn-operator-controller-manager-5955d8c787-zbd8b

AddedInterface

Add eth0 [10.128.0.152/23] from ovn-kubernetes

openstack-operators

multus

telemetry-operator-controller-manager-589c568786-9ljm5

AddedInterface

Add eth0 [10.128.0.155/23] from ovn-kubernetes

openstack-operators

multus

neutron-operator-controller-manager-6bd4687957-svmn2

AddedInterface

Add eth0 [10.128.0.148/23] from ovn-kubernetes

openstack-operators

kubelet

octavia-operator-controller-manager-659dc6bbfc-z4h54

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06"

openstack-operators

multus

octavia-operator-controller-manager-659dc6bbfc-z4h54

AddedInterface

Add eth0 [10.128.0.150/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-v7jpx"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

manila-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-tc9k2

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04"

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-9ljm5

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc"

openstack-operators

cert-manager-certificates-request-manager

manila-operator-metrics-certs

Requested

Created new CertificateRequest resource "manila-operator-metrics-certs-1"

openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-8xrtm | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | multus | test-operator-controller-manager-5dc6794d5b-96zg4 | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-mpbc6"
openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Failed | Failed to pull image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98": pull QPS exceeded
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Failed | Failed to pull image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98": pull QPS exceeded
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Failed | Error: ErrImagePull
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97"

openstack-operators | multus | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-9ljm5 | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc"
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-8xrtm | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1"
openstack-operators | multus | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-mpbc6"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | multus | test-operator-controller-manager-5dc6794d5b-96zg4 | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-tc9k2 | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04"
openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Failed | Error: ErrImagePull

openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-r8skw"
openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-r8skw"
openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-5v9q2"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-5v9q2"

openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Failed | Error: ImagePullBackOff
openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-znbb7" (x2)
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98" (x2)
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Failed | Error: ImagePullBackOff
openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-znbb7"
openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | infra-operator-metrics-certs | Requested | Created new CertificateRequest resource "infra-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-request-manager | infra-operator-metrics-certs | Requested | Created new CertificateRequest resource "infra-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | horizon-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io

openstack-operators | cert-manager-certificates-key-manager | test-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "test-operator-metrics-certs-95hjr"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | watcher-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-p9wwm"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-h8qrp"
openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-approver | horizon-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | watcher-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-p9wwm"
openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | test-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "test-operator-metrics-certs-95hjr"
openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-h8qrp"
openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully

openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-m7drh"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-m7drh"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

ironic-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-4nn9q"

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x5)

openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-jb4xp"
openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued (x5)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io (x5)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-4nn9q"
openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x5)
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-jb4xp"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | heat-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | heat-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

swift-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-issuing

placement-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

octavia-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | placement-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued (x2)
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98" (x2)
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98"
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-8xrtm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 17.404s (17.404s including waiting). Image size: 190626789 bytes.
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-75df9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 18.821s (18.821s including waiting). Image size: 191605671 bytes.
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-8xrtm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 17.404s (17.404s including waiting). Image size: 190626789 bytes.
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-75df9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 18.821s (18.821s including waiting). Image size: 191605671 bytes.
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zghgv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be" in 18.86s (18.86s including waiting). Image size: 191991232 bytes.
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zghgv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be" in 18.86s (18.86s including waiting). Image size: 191991232 bytes.
openstack-operators | kubelet | octavia-operator-controller-manager-659dc6bbfc-z4h54 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" in 18.979s (18.979s including waiting). Image size: 193556939 bytes.
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-vq97j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 20.42s (20.42s including waiting). Image size: 195315176 bytes.
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-9ljm5 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc" in 18.751s (18.751s including waiting). Image size: 196099046 bytes.
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-tc9k2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 18.132s (18.132s including waiting). Image size: 192091569 bytes.
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-zlj5w | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 18.952s (18.952s including waiting). Image size: 193023123 bytes.
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-zbd8b | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192" in 18.992s (18.992s including waiting). Image size: 190114714 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-rngmn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 20.393s (20.393s including waiting). Image size: 191103449 bytes.
openstack-operators | kubelet | octavia-operator-controller-manager-659dc6bbfc-z4h54 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" in 18.979s (18.979s including waiting). Image size: 193556939 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-db24j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 19.56s (19.56s including waiting). Image size: 191665087 bytes.
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-9ljm5 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc" in 18.751s (18.751s including waiting). Image size: 196099046 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-db24j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 19.56s (19.56s including waiting). Image size: 191665087 bytes.
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-qbghx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" in 18.348s (18.348s including waiting). Image size: 191246784 bytes.
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-zlj5w | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 18.952s (18.952s including waiting). Image size: 193023123 bytes.
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-tc9k2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 18.132s (18.132s including waiting). Image size: 192091569 bytes.
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-qbghx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" in 18.348s (18.348s including waiting). Image size: 191246784 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-6bd4687957-svmn2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" in 18.353s (18.353s including waiting). Image size: 191026634 bytes.
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-gmljt | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 19.311s (19.311s including waiting). Image size: 190376908 bytes.
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-gmljt | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 19.311s (19.311s including waiting). Image size: 190376908 bytes.
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-zbd8b | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192" in 18.992s (18.992s including waiting). Image size: 190114714 bytes.
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-m52ng | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" in 19.846s (19.846s including waiting). Image size: 191425982 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-rngmn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 20.393s (20.393s including waiting). Image size: 191103449 bytes.
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-vq97j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 20.42s (20.42s including waiting). Image size: 195315176 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-6bd4687957-svmn2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" in 18.353s (18.353s including waiting). Image size: 191026634 bytes.
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-m52ng | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" in 19.846s (19.846s including waiting). Image size: 191425982 bytes.
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 19.155s (19.155s including waiting). Image size: 176351298 bytes.
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-8xrtm | Started | Started container manager
openstack-operators | manila-operator-controller-manager-67d996989d-qbghx_80558fa4-17aa-4b89-9425-f3afbb8fb9ea | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-67d996989d-qbghx_80558fa4-17aa-4b89-9425-f3afbb8fb9ea became leader
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a"
openstack-operators | neutron-operator-controller-manager-6bd4687957-svmn2_41b0a275-b5f2-4813-bf3c-eb54e416ff55 | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-6bd4687957-svmn2_41b0a275-b5f2-4813-bf3c-eb54e416ff55 became leader
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" in 19.269s (19.269s including waiting). Image size: 190936524 bytes.
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zghgv | Started | Started container manager
openstack-operators | neutron-operator-controller-manager-6bd4687957-svmn2_41b0a275-b5f2-4813-bf3c-eb54e416ff55 | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-6bd4687957-svmn2_41b0a275-b5f2-4813-bf3c-eb54e416ff55 became leader
openstack-operators | manila-operator-controller-manager-67d996989d-qbghx_80558fa4-17aa-4b89-9425-f3afbb8fb9ea | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-67d996989d-qbghx_80558fa4-17aa-4b89-9425-f3afbb8fb9ea became leader
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98" in 6.804s (6.804s including waiting). Image size: 188905403 bytes.
openstack-operators | placement-operator-controller-manager-8497b45c89-8xrtm_99c1ac29-31a7-42ae-b812-42023cdfefe5 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-8497b45c89-8xrtm_99c1ac29-31a7-42ae-b812-42023cdfefe5 became leader
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zghgv | Created | Created container: manager
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-8xrtm | Created | Created container: manager
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-8xrtm | Started | Started container manager
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 19.155s (19.155s including waiting). Image size: 176351298 bytes.
openstack-operators | multus | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes
openstack-operators | multus | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-qbghx | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-qbghx | Started | Started container manager
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zghgv | Created | Created container: manager
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zghgv | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98" in 6.804s (6.804s including waiting). Image size: 188905403 bytes.
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a"
openstack-operators | multus | infra-operator-controller-manager-5f879c76b6-bv48m | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" in 19.269s (19.269s including waiting). Image size: 190936524 bytes.
openstack-operators | multus | infra-operator-controller-manager-5f879c76b6-bv48m | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes
openstack-operators | placement-operator-controller-manager-8497b45c89-8xrtm_99c1ac29-31a7-42ae-b812-42023cdfefe5 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-8497b45c89-8xrtm_99c1ac29-31a7-42ae-b812-42023cdfefe5 became leader
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-sfjt8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 19.577s (19.577s including waiting). Image size: 193562469 bytes.
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-sfjt8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 19.577s (19.577s including waiting). Image size: 193562469 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-6bd4687957-svmn2 | Started | Started container manager
openstack-operators | kubelet | neutron-operator-controller-manager-6bd4687957-svmn2 | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-qbghx | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-qbghx | Started | Started container manager
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-28hdf | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 20.043s (20.043s including waiting). Image size: 189413585 bytes.
openstack-operators | glance-operator-controller-manager-784b5bb6c5-zghgv_1200cf00-a094-4352-9f3e-d98411b374b0 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-784b5bb6c5-zghgv_1200cf00-a094-4352-9f3e-d98411b374b0 became leader
openstack-operators | glance-operator-controller-manager-784b5bb6c5-zghgv_1200cf00-a094-4352-9f3e-d98411b374b0 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-784b5bb6c5-zghgv_1200cf00-a094-4352-9f3e-d98411b374b0 became leader
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-28hdf | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 20.043s (20.043s including waiting). Image size: 189413585 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-8497b45c89-8xrtm

Created

Created container: manager

openstack-operators

kubelet

neutron-operator-controller-manager-6bd4687957-svmn2

Created

Created container: manager

openstack-operators

kubelet

neutron-operator-controller-manager-6bd4687957-svmn2

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-28hdf

Started

Started container manager

openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-db24j | Started | Started container manager
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-gmljt | Created | Created container: manager
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Created | Created container: manager
openstack-operators | swift-operator-controller-manager-68f46476f-tc9k2_23e36049-d5ee-439d-a1f0-cdfa6f2ab419 | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-68f46476f-tc9k2_23e36049-d5ee-439d-a1f0-cdfa6f2ab419 became leader
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-28hdf | Created | Created container: manager
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-28hdf | Created | Created container: manager
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-28hdf | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-zbd8b | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-zbd8b | Created | Created container: manager
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-rngmn | Created | Created container: manager
openstack-operators | watcher-operator-controller-manager-bccc79885-96xg2_9bd6f392-56ae-4e3b-b2f7-b651750d0846 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-bccc79885-96xg2_9bd6f392-56ae-4e3b-b2f7-b651750d0846 became leader
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Created | Created container: operator
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Started | Started container operator
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-zlj5w | Started | Started container manager
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-zlj5w | Created | Created container: manager
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-sfjt8 | Created | Created container: manager
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-sfjt8 | Started | Started container manager
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-sfjt8 | Created | Created container: manager
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-sfjt8 | Started | Started container manager
openstack-operators | mariadb-operator-controller-manager-6994f66f48-28hdf_1da2db60-3dce-4b62-9199-4729f5513007 | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-6994f66f48-28hdf_1da2db60-3dce-4b62-9199-4729f5513007 became leader
openstack-operators | keystone-operator-controller-manager-b4d948c87-zlj5w_cd09d96b-1d23-4b67-b7cc-bc4dc5fae919 | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-b4d948c87-zlj5w_cd09d96b-1d23-4b67-b7cc-bc4dc5fae919 became leader
openstack-operators | test-operator-controller-manager-5dc6794d5b-96zg4_199d95ab-e2a8-4ac6-b016-d6633dc4a80b | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-5dc6794d5b-96zg4_199d95ab-e2a8-4ac6-b016-d6633dc4a80b became leader
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-75df9 | Started | Started container manager
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-75df9 | Created | Created container: manager
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-gmljt | Started | Started container manager
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-tc9k2 | Created | Created container: manager
openstack-operators | heat-operator-controller-manager-69f49c598c-75df9_e209d742-f48a-4d65-a83d-a14c3e950065 | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-69f49c598c-75df9_e209d742-f48a-4d65-a83d-a14c3e950065 became leader
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-rngmn | Started | Started container manager
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Started | Started container manager

openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Created | Created container: manager
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-tc9k2 | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Started | Started container manager
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-75df9 | Created | Created container: manager
openstack-operators | kubelet | octavia-operator-controller-manager-659dc6bbfc-z4h54 | Created | Created container: manager
openstack-operators | kubelet | octavia-operator-controller-manager-659dc6bbfc-z4h54 | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Created | Created container: manager
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-75df9 | Started | Started container manager
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-db24j | Created | Created container: manager
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-vq97j | Started | Started container manager
openstack-operators | cinder-operator-controller-manager-55d77d7b5c-m52ng_ef9cfd27-ba24-4708-8924-d18eb26b4b52 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-55d77d7b5c-m52ng_ef9cfd27-ba24-4708-8924-d18eb26b4b52 became leader
openstack-operators | octavia-operator-controller-manager-659dc6bbfc-z4h54_5ae14391-fc11-4e56-8a42-1f298f4c205b | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-659dc6bbfc-z4h54_5ae14391-fc11-4e56-8a42-1f298f4c205b became leader
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-9ljm5 | Started | Started container manager
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-9ljm5 | Created | Created container: manager
openstack-operators | designate-operator-controller-manager-6d8bf5c495-vq97j_40ff12ab-2709-46bb-80a9-c0ea43f26443 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-6d8bf5c495-vq97j_40ff12ab-2709-46bb-80a9-c0ea43f26443 became leader
openstack-operators | telemetry-operator-controller-manager-589c568786-9ljm5_2d14c0f0-78cc-4803-aa06-040bfc21f977 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-589c568786-9ljm5_2d14c0f0-78cc-4803-aa06-040bfc21f977 became leader
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-vq97j | Created | Created container: manager
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Started | Started container manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-gmljt | Started | Started container manager
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-gmljt | Created | Created container: manager
openstack-operators | telemetry-operator-controller-manager-589c568786-9ljm5_2d14c0f0-78cc-4803-aa06-040bfc21f977 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-589c568786-9ljm5_2d14c0f0-78cc-4803-aa06-040bfc21f977 became leader
openstack-operators | designate-operator-controller-manager-6d8bf5c495-vq97j_40ff12ab-2709-46bb-80a9-c0ea43f26443 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-6d8bf5c495-vq97j_40ff12ab-2709-46bb-80a9-c0ea43f26443 became leader
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-m52ng | Created | Created container: manager
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-m52ng | Started | Started container manager
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-db24j | Created | Created container: manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-tc9k2 | Started | Started container manager
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-tc9k2 | Created | Created container: manager

openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Started | Started container operator
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-vrbmh | Created | Created container: operator
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-9ljm5 | Created | Created container: manager
openstack-operators | heat-operator-controller-manager-69f49c598c-75df9_e209d742-f48a-4d65-a83d-a14c3e950065 | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-69f49c598c-75df9_e209d742-f48a-4d65-a83d-a14c3e950065 became leader
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-9ljm5 | Started | Started container manager
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-zlj5w | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-zbd8b | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-zbd8b | Created | Created container: manager
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-m52ng | Started | Started container manager
openstack-operators | watcher-operator-controller-manager-bccc79885-96xg2_9bd6f392-56ae-4e3b-b2f7-b651750d0846 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-bccc79885-96xg2_9bd6f392-56ae-4e3b-b2f7-b651750d0846 became leader
openstack-operators | kubelet | octavia-operator-controller-manager-659dc6bbfc-z4h54 | Created | Created container: manager
openstack-operators | keystone-operator-controller-manager-b4d948c87-zlj5w_cd09d96b-1d23-4b67-b7cc-bc4dc5fae919 | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-b4d948c87-zlj5w_cd09d96b-1d23-4b67-b7cc-bc4dc5fae919 became leader
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-vq97j | Started | Started container manager
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-vq97j | Created | Created container: manager
openstack-operators | test-operator-controller-manager-5dc6794d5b-96zg4_199d95ab-e2a8-4ac6-b016-d6633dc4a80b | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-5dc6794d5b-96zg4_199d95ab-e2a8-4ac6-b016-d6633dc4a80b became leader
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-zlj5w | Created | Created container: manager
openstack-operators | mariadb-operator-controller-manager-6994f66f48-28hdf_1da2db60-3dce-4b62-9199-4729f5513007 | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-6994f66f48-28hdf_1da2db60-3dce-4b62-9199-4729f5513007 became leader
openstack-operators | swift-operator-controller-manager-68f46476f-tc9k2_23e36049-d5ee-439d-a1f0-cdfa6f2ab419 | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-68f46476f-tc9k2_23e36049-d5ee-439d-a1f0-cdfa6f2ab419 became leader
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Created | Created container: manager
openstack-operators | barbican-operator-controller-manager-868647ff47-rngmn_aef69b62-7510-4101-852d-43b06e0b0794 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-868647ff47-rngmn_aef69b62-7510-4101-852d-43b06e0b0794 became leader
openstack-operators | ovn-operator-controller-manager-5955d8c787-zbd8b_d01f65ee-ad6f-4d52-a644-cb107e76b18c | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-5955d8c787-zbd8b_d01f65ee-ad6f-4d52-a644-cb107e76b18c became leader
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-96zg4 | Started | Started container manager
openstack-operators | octavia-operator-controller-manager-659dc6bbfc-z4h54_5ae14391-fc11-4e56-8a42-1f298f4c205b | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-659dc6bbfc-z4h54_5ae14391-fc11-4e56-8a42-1f298f4c205b became leader
openstack-operators | cinder-operator-controller-manager-55d77d7b5c-m52ng_ef9cfd27-ba24-4708-8924-d18eb26b4b52 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-55d77d7b5c-m52ng_ef9cfd27-ba24-4708-8924-d18eb26b4b52 became leader
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-m52ng | Created | Created container: manager
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-rngmn | Created | Created container: manager
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-rngmn | Started | Started container manager
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-db24j | Started | Started container manager
openstack-operators | ovn-operator-controller-manager-5955d8c787-zbd8b_d01f65ee-ad6f-4d52-a644-cb107e76b18c | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-5955d8c787-zbd8b_d01f65ee-ad6f-4d52-a644-cb107e76b18c became leader
openstack-operators | barbican-operator-controller-manager-868647ff47-rngmn_aef69b62-7510-4101-852d-43b06e0b0794 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-868647ff47-rngmn_aef69b62-7510-4101-852d-43b06e0b0794 became leader

openstack-operators | kubelet | octavia-operator-controller-manager-659dc6bbfc-z4h54 | Started | Started container manager
openstack-operators | horizon-operator-controller-manager-5b9b8895d5-gmljt_92a3d03b-20f3-4fad-a416-827c4864681c | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-5b9b8895d5-gmljt_92a3d03b-20f3-4fad-a416-827c4864681c became leader
openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-vrbmh_990703ed-89c4-475f-b1fa-7248e2bee7e0 | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-vrbmh_990703ed-89c4-475f-b1fa-7248e2bee7e0 became leader
openstack-operators | nova-operator-controller-manager-567668f5cf-sfjt8_a028d414-3506-4813-a2f2-897d31574319 | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-567668f5cf-sfjt8_a028d414-3506-4813-a2f2-897d31574319 became leader
openstack-operators | ironic-operator-controller-manager-554564d7fc-db24j_7dfed9bf-6d04-474a-adc0-c1d5544e5008 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-554564d7fc-db24j_7dfed9bf-6d04-474a-adc0-c1d5544e5008 became leader
openstack-operators | ironic-operator-controller-manager-554564d7fc-db24j_7dfed9bf-6d04-474a-adc0-c1d5544e5008 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-554564d7fc-db24j_7dfed9bf-6d04-474a-adc0-c1d5544e5008 became leader
openstack-operators | horizon-operator-controller-manager-5b9b8895d5-gmljt_92a3d03b-20f3-4fad-a416-827c4864681c | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-5b9b8895d5-gmljt_92a3d03b-20f3-4fad-a416-827c4864681c became leader
openstack-operators | nova-operator-controller-manager-567668f5cf-sfjt8_a028d414-3506-4813-a2f2-897d31574319 | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-567668f5cf-sfjt8_a028d414-3506-4813-a2f2-897d31574319 became leader
openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-vrbmh_990703ed-89c4-475f-b1fa-7248e2bee7e0 | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-vrbmh_990703ed-89c4-475f-b1fa-7248e2bee7e0 became leader
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Started | Started container manager
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Started | Started container manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 4.211s (4.211s including waiting). Image size: 190527593 bytes.
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 4.211s (4.211s including waiting). Image size: 190527593 bytes.
openstack-operators | openstack-baremetal-operator-controller-manager-579b7786b92xw4j_0fd72a1a-ae2f-4ebe-ab03-862a6babe183 | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-579b7786b92xw4j_0fd72a1a-ae2f-4ebe-ab03-862a6babe183 became leader
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Created | Created container: manager
openstack-operators | infra-operator-controller-manager-5f879c76b6-bv48m_b45f34a2-1758-46bc-b504-f4678ab2983a | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-5f879c76b6-bv48m_b45f34a2-1758-46bc-b504-f4678ab2983a became leader
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 4.348s (4.348s including waiting). Image size: 192826291 bytes.
openstack-operators | openstack-baremetal-operator-controller-manager-579b7786b92xw4j_0fd72a1a-ae2f-4ebe-ab03-862a6babe183 | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-579b7786b92xw4j_0fd72a1a-ae2f-4ebe-ab03-862a6babe183 became leader
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Created | Created container: manager
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Started | Started container manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Created | Created container: manager
openstack-operators | infra-operator-controller-manager-5f879c76b6-bv48m_b45f34a2-1758-46bc-b504-f4678ab2983a | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-5f879c76b6-bv48m_b45f34a2-1758-46bc-b504-f4678ab2983a became leader
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 4.348s (4.348s including waiting). Image size: 192826291 bytes.
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-bv48m | Created | Created container: manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b92xw4j | Started | Started container manager

openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785" already present on machine
openstack-operators | multus | openstack-operator-controller-manager-5dc486cffc-rbqzr | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785" already present on machine
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | Created | Created container: manager
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | Started | Started container manager
openstack-operators | openstack-operator-controller-manager-5dc486cffc-rbqzr_82222bf5-5615-471a-8d1b-202e8d44b851 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-5dc486cffc-rbqzr_82222bf5-5615-471a-8d1b-202e8d44b851 became leader
openstack-operators | multus | openstack-operator-controller-manager-5dc486cffc-rbqzr | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes
openstack-operators | openstack-operator-controller-manager-5dc486cffc-rbqzr_82222bf5-5615-471a-8d1b-202e8d44b851 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-5dc486cffc-rbqzr_82222bf5-5615-471a-8d1b-202e8d44b851 became leader
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | Created | Created container: manager
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-rbqzr | Started | Started container manager

openstack | cert-manager-certificates-trigger | rootca-public | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | cert-manager-issuers | rootca-public | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-public" not found (x2)
openstack | cert-manager-issuers | rootca-public | ErrInitIssuer | Error initializing issuer: secrets "rootca-public" not found
openstack | cert-manager-certificates-issuing | rootca-public | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | rootca-public-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | cert-manager-issuers | rootca-internal | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-internal" not found
openstack | cert-manager-certificaterequests-issuer-vault | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | rootca-public | Generated | Stored new private key in temporary Secret resource "rootca-public-g7q7l"
openstack | cert-manager-certificates-request-manager | rootca-public | Requested | Created new CertificateRequest resource "rootca-public-1" (x2)

openstack | cert-manager-issuers | rootca-internal | ErrInitIssuer | Error initializing issuer: secrets "rootca-internal" not found
openstack | cert-manager-certificates-trigger | rootca-internal | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-request-manager | rootca-internal | Requested | Created new CertificateRequest resource "rootca-internal-1"
openstack | cert-manager-certificates-key-manager | rootca-internal | Generated | Stored new private key in temporary Secret resource "rootca-internal-l9c2h"
openstack | cert-manager-certificates-trigger | rootca-libvirt | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | cert-manager-issuers | rootca-libvirt | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-libvirt" not found
openstack | cert-manager-certificates-issuing | rootca-internal | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rootca-internal-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | cert-manager-issuers | rootca-libvirt | ErrInitIssuer | Error initializing issuer: secrets "rootca-libvirt" not found
openstack | cert-manager-certificates-key-manager | rootca-libvirt | Generated | Stored new private key in temporary Secret resource "rootca-libvirt-fwhxh"
openstack | cert-manager-certificates-request-manager | rootca-libvirt | Requested | Created new CertificateRequest resource "rootca-libvirt-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rootca-libvirt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-issuing | rootca-libvirt | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | rootca-ovn | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | cert-manager-issuers | rootca-ovn | ErrInitIssuer | Error initializing issuer: secrets "rootca-ovn" not found
openstack | cert-manager-certificaterequests-issuer-vault | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | cert-manager-issuers | rootca-ovn | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-ovn" not found (x3)
openstack | cert-manager-issuers | rootca-public | KeyPairVerified | Signing CA verified

openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-issuing | rootca-ovn | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | rootca-ovn-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | rootca-ovn | Requested | Created new CertificateRequest resource "rootca-ovn-1"
openstack | cert-manager-certificates-key-manager | rootca-ovn | Generated | Stored new private key in temporary Secret resource "rootca-ovn-ph97v"
openstack | cert-manager-certificates-trigger | rabbitmq-svc | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-bc7f9869-4lgxt | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes
openstack | cert-manager-certificates-request-manager | rabbitmq-svc | Requested | Created new CertificateRequest resource "rabbitmq-svc-1"
openstack | cert-manager-certificates-key-manager | rabbitmq-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-svc-pkqss"
openstack | kubelet | dnsmasq-dns-bc7f9869-4lgxt | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rabbitmq-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7d4c486879 to 1
openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | replicaset-controller | dnsmasq-dns-7d4c486879 | SuccessfulCreate | Created pod: dnsmasq-dns-7d4c486879-5m7lz
openstack | replicaset-controller | dnsmasq-dns-bc7f9869 | SuccessfulCreate | Created pod: dnsmasq-dns-bc7f9869-4lgxt
openstack | cert-manager-certificates-key-manager | rabbitmq-cell1-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-pm8sz"

openstack

cert-manager-certificates-trigger

rabbitmq-cell1-svc

Issuing

Issuing certificate as Secret does not exist
(x3)

openstack

cert-manager-issuers

rootca-internal

KeyPairVerified

Signing CA verified

openstack

metallb-controller

dnsmasq-dns

IPAllocated

Assigned IP ["192.168.122.80"]
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-bc7f9869 to 1

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-cell1-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rabbitmq-cell1-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-request-manager

rabbitmq-cell1-svc

Requested

Created new CertificateRequest resource "rabbitmq-cell1-svc-1"

openstack

multus

dnsmasq-dns-7d4c486879-5m7lz

AddedInterface

Add eth0 [10.128.0.161/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7d4c486879-5m7lz

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"
(x3)

openstack

cert-manager-issuers

rootca-libvirt

KeyPairVerified

Signing CA verified

openstack

cert-manager-certificates-issuing

rabbitmq-cell1-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

rabbitmq-svc

Issuing

The certificate has been successfully issued

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap

openstack

replicaset-controller

dnsmasq-dns-6974cff98c

SuccessfulCreate

Created pod: dnsmasq-dns-6974cff98c-2t99f

openstack

replicaset-controller

dnsmasq-dns-7d4c486879

SuccessfulDelete

Deleted pod: dnsmasq-dns-7d4c486879-5m7lz

openstack

replicaset-controller

dnsmasq-dns-bc7f9869

SuccessfulDelete

Deleted pod: dnsmasq-dns-bc7f9869-4lgxt
(x2)

openstack

metallb-controller

rabbitmq-cell1

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificates-request-manager

galera-openstack-svc

Requested

Created new CertificateRequest resource "galera-openstack-svc-1"

openstack

cert-manager-certificates-key-manager

galera-openstack-svc

Generated

Stored new private key in temporary Secret resource "galera-openstack-svc-4j2jq"

openstack

cert-manager-certificaterequests-issuer-venafi

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

galera-openstack-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-default-user of Type *v1.Secret

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-server of Type *v1.ServiceAccount

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-peer-discovery of Type *v1.Role

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-server of Type *v1.RoleBinding

default

endpoint-controller

rabbitmq-cell1

FailedToCreateEndpoint

Failed to create endpoint for service openstack/rabbitmq-cell1: endpoints "rabbitmq-cell1" already exists

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

(combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet

openstack

cert-manager-certificaterequests-issuer-vault

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

metallb-controller

rabbitmq-cell1

IPAllocated

Assigned IP ["172.17.0.86"]

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1 of Type *v1.Service

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-nodes of Type *v1.Service

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-7c45d57b9c to 1 from 0

openstack

statefulset-controller

rabbitmq-server

SuccessfulCreate

create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success

openstack

statefulset-controller

rabbitmq-server

SuccessfulCreate

create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-bc7f9869 to 0 from 1

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-6974cff98c to 1 from 0

openstack

persistentvolume-controller

persistence-rabbitmq-cell1-server-0

WaitForFirstConsumer

waiting for first consumer to be created before binding
(x2)

openstack

metallb-controller

rabbitmq-cell1

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

cert-manager-certificaterequests-issuer-selfsigned

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-7d4c486879 to 0 from 1
(x3)

openstack

cert-manager-issuers

rootca-ovn

KeyPairVerified

Signing CA verified

openstack

cert-manager-certificaterequests-issuer-acme

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

dnsmasq-dns-7c45d57b9c

SuccessfulCreate

Created pod: dnsmasq-dns-7c45d57b9c-k22s7

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-nodes of Type *v1.Service

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq of Type *v1.Service

openstack

metallb-controller

rabbitmq

IPAllocated

Assigned IP ["172.17.0.85"]
(x2)

openstack

metallb-controller

rabbitmq

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

statefulset-controller

rabbitmq-cell1-server

SuccessfulCreate

create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success
(x2)

openstack

metallb-controller

rabbitmq

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-erlang-cookie of Type *v1.Secret

openstack

persistentvolume-controller

persistence-rabbitmq-server-0

WaitForFirstConsumer

waiting for first consumer to be created before binding
(x2)

openstack

persistentvolume-controller

persistence-rabbitmq-server-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

persistence-rabbitmq-server-0

Provisioning

External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0"

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-default-user of Type *v1.Secret

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-plugins-conf of Type *v1.ConfigMap

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-server-conf of Type *v1.ConfigMap

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-server of Type *v1.ServiceAccount

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-peer-discovery of Type *v1.Role

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-server of Type *v1.RoleBinding

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

(combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet

openstack

cert-manager-certificates-trigger

galera-openstack-svc

Issuing

Issuing certificate as Secret does not exist

openstack

statefulset-controller

openstack-galera

SuccessfulCreate

create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success

openstack

cert-manager-certificaterequests-issuer-vault

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-cell1-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

galera-openstack-cell1-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

multus

dnsmasq-dns-7c45d57b9c-k22s7

AddedInterface

Add eth0 [10.128.0.163/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-venafi

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

persistentvolume-controller

mysql-db-openstack-galera-0

WaitForFirstConsumer

waiting for first consumer to be created before binding
(x2)

openstack

persistentvolume-controller

mysql-db-openstack-galera-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-7c45d57b9c-k22s7

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"

openstack

statefulset-controller

openstack-galera

SuccessfulCreate

create Pod openstack-galera-0 in StatefulSet openstack-galera successful

openstack

multus

dnsmasq-dns-6974cff98c-2t99f

AddedInterface

Add eth0 [10.128.0.162/23] from ovn-kubernetes

openstack

cert-manager-certificates-issuing

galera-openstack-svc

Issuing

The certificate has been successfully issued
(x2)

openstack

persistentvolume-controller

persistence-rabbitmq-cell1-server-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

statefulset-controller

rabbitmq-cell1-server

SuccessfulCreate

create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful

openstack

cert-manager-certificates-trigger

galera-openstack-cell1-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

galera-openstack-cell1-svc

Generated

Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-25nfv"

openstack

cert-manager-certificates-request-manager

galera-openstack-cell1-svc

Requested

Created new CertificateRequest resource "galera-openstack-cell1-svc-1"

openstack

kubelet

dnsmasq-dns-6974cff98c-2t99f

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"

openstack

cert-manager-certificates-issuing

galera-openstack-cell1-svc

Issuing

The certificate has been successfully issued

openstack

statefulset-controller

openstack-cell1-galera

SuccessfulCreate

create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success

openstack

statefulset-controller

openstack-cell1-galera

SuccessfulCreate

create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful

openstack

persistentvolume-controller

mysql-db-openstack-cell1-galera-0

WaitForFirstConsumer

waiting for first consumer to be created before binding
(x2)

openstack

persistentvolume-controller

mysql-db-openstack-cell1-galera-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

cert-manager-certificates-key-manager

memcached-svc

Generated

Stored new private key in temporary Secret resource "memcached-svc-4xm54"

openstack

cert-manager-certificates-trigger

memcached-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

memcached-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-request-manager

memcached-svc

Requested

Created new CertificateRequest resource "memcached-svc-1"

openstack

cert-manager-certificates-issuing

memcached-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-selfsigned

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

statefulset-controller

memcached

SuccessfulCreate

create Pod memcached-0 in StatefulSet memcached successful

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

persistence-rabbitmq-cell1-server-0

Provisioning

External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0"

openstack

cert-manager-certificaterequests-issuer-ca

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

persistence-rabbitmq-server-0

ProvisioningSucceeded

Successfully provisioned volume pvc-f81fa97a-3a54-4f13-a867-f22d9416fbaa

openstack

cert-manager-certificaterequests-issuer-vault

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

memcached-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

ovn-metrics

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-approver

ovn-metrics-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

ovn-metrics

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

ovn-metrics-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

persistence-rabbitmq-cell1-server-0

ProvisioningSucceeded

Successfully provisioned volume pvc-28cb1ecd-6ba3-422b-a334-521132dedf93

openstack

cert-manager-certificaterequests-issuer-ca

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ovn-metrics

Generated

Stored new private key in temporary Secret resource "ovn-metrics-knrcd"

openstack

cert-manager-certificaterequests-issuer-venafi

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

mysql-db-openstack-galera-0

Provisioning

External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0"

openstack

cert-manager-certificates-request-manager

ovn-metrics

Requested

Created new CertificateRequest resource "ovn-metrics-1"

openstack

cert-manager-certificates-key-manager

ovnnorthd-ovndbs

Generated

Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-9bxvz"

openstack

cert-manager-certificates-key-manager

ovndbcluster-nb-ovndbs

Generated

Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-69bsp"

openstack

cert-manager-certificates-trigger

ovncontroller-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-trigger

ovndbcluster-nb-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-trigger

ovnnorthd-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-acme

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ovndbcluster-nb-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ovndbcluster-nb-ovndbs

Requested

Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-venafi

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ovnnorthd-ovndbs

Requested

Created new CertificateRequest resource "ovnnorthd-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

mysql-db-openstack-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-62c45296-58e6-423d-9cca-31bf5b6d67c8

openstack

cert-manager-certificaterequests-issuer-ca

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ovncontroller-ovndbs

Generated

Stored new private key in temporary Secret resource "ovncontroller-ovndbs-7wdwg"

openstack

cert-manager-certificates-request-manager

ovncontroller-ovndbs

Requested

Created new CertificateRequest resource "ovncontroller-ovndbs-1"

openstack

cert-manager-certificates-trigger

neutron-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-vault

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

mysql-db-openstack-cell1-galera-0

Provisioning

External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0"

openstack

cert-manager-certificates-key-manager

neutron-ovndbs

Generated

Stored new private key in temporary Secret resource "neutron-ovndbs-c2djd"

openstack

cert-manager-certificaterequests-issuer-ca

ovncontroller-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

mysql-db-openstack-cell1-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-bec77aba-dbd4-474b-9c5a-cb1a27b429a1

openstack

cert-manager-certificaterequests-approver

ovncontroller-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-approver

ovnnorthd-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ovnnorthd-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

neutron-ovndbs

Requested

Created new CertificateRequest resource "neutron-ovndbs-1"

openstack

cert-manager-certificates-issuing

ovndbcluster-nb-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

ovnnorthd-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-vault

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

Provisioning

External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0"

openstack

cert-manager-certificates-trigger

ovndbcluster-sb-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

statefulset-controller

ovsdbserver-nb

SuccessfulCreate

create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

neutron-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

statefulset-controller

ovsdbserver-nb

SuccessfulCreate

create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success

openstack

persistentvolume-controller

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

persistentvolume-controller

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

daemonset-controller

ovn-controller-ovs

SuccessfulCreate

Created pod: ovn-controller-ovs-86mtg

openstack

daemonset-controller

ovn-controller

SuccessfulCreate

Created pod: ovn-controller-5kh8v

openstack

topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

ProvisioningSucceeded

Successfully provisioned volume pvc-ee5ad954-894e-4ab1-8df1-46fd7b431ce0

openstack

cert-manager-certificaterequests-issuer-venafi

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

Namespace | Component | RelatedObject | Reason | Message
ovndbcluster-sb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1"
openstack | cert-manager-certificates-issuing | neutron-ovndbs | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | ovndbcluster-sb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-8jvpt"
openstack | cert-manager-certificates-issuing | ovncontroller-ovndbs | Issuing | The certificate has been successfully issued
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0"
openstack | cert-manager-certificates-issuing | ovndbcluster-sb-ovndbs | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful
openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success
openstack | cert-manager-certificaterequests-approver | ovndbcluster-sb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-f6bcea25-7406-4a85-8ba1-e12b630dfa9f

openstack | multus | rabbitmq-server-0 | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes
openstack | kubelet | rabbitmq-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20"
openstack | kubelet | dnsmasq-dns-7c45d57b9c-k22s7 | Started | Started container init
openstack | multus | openstack-cell1-galera-0 | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-6974cff98c-2t99f | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 17.734s (17.734s including waiting). Image size: 679396694 bytes.
openstack | kubelet | dnsmasq-dns-6974cff98c-2t99f | Created | Created container: init
openstack | multus | memcached-0 | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-7c45d57b9c-k22s7 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 17.509s (17.509s including waiting). Image size: 679396694 bytes.
openstack | kubelet | dnsmasq-dns-bc7f9869-4lgxt | Started | Started container init
openstack | kubelet | dnsmasq-dns-7c45d57b9c-k22s7 | Created | Created container: init
openstack | kubelet | dnsmasq-dns-6974cff98c-2t99f | Started | Started container init
openstack | kubelet | dnsmasq-dns-bc7f9869-4lgxt | Created | Created container: init
openstack | kubelet | dnsmasq-dns-bc7f9869-4lgxt | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 21.372s (21.372s including waiting). Image size: 679396694 bytes.
openstack | kubelet | dnsmasq-dns-7d4c486879-5m7lz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 20.993s (20.993s including waiting). Image size: 679396694 bytes.
openstack | kubelet | dnsmasq-dns-7d4c486879-5m7lz | Created | Created container: init
openstack | kubelet | dnsmasq-dns-7d4c486879-5m7lz | Started | Started container init
openstack | multus | rabbitmq-cell1-server-0 | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes
openstack | multus | openstack-galera-0 | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes
openstack | multus | ovn-controller-ovs-86mtg | AddedInterface | Add tenant [172.19.0.30/24] from openstack/tenant
openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add internalapi [172.17.0.31/24] from openstack/internalapi
openstack | kubelet | ovsdbserver-sb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:2847fc8e7f911c23656f50e02d4fd6275e9edecdc19e9d04cc999c0fcc5bf917"
openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add internalapi [172.17.0.30/24] from openstack/internalapi
openstack | kubelet | ovn-controller-5kh8v | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417"
openstack | kubelet | rabbitmq-cell1-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20"
openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-6974cff98c-2t99f | Started | Started container dnsmasq-dns
openstack | multus | ovn-controller-ovs-86mtg | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes
openstack | kubelet | openstack-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658"
openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes
openstack | kubelet | memcached-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:d6c93f70d8b142180af00baccabe84529baba1bb1e8bfd9bc2b58efb09aef590"

openstack | multus | ovn-controller-ovs-86mtg | AddedInterface | Add datacentre [] from openstack/datacentre
openstack | kubelet | dnsmasq-dns-7c45d57b9c-k22s7 | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-7c45d57b9c-k22s7 | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6974cff98c-2t99f | Created | Created container: dnsmasq-dns
openstack | multus | ovn-controller-ovs-86mtg | AddedInterface | Add ironic [172.20.1.30/24] from openstack/ironic
openstack | kubelet | openstack-cell1-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658"
openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:05c8a64428215567969452413877b06edfb244f075c0161cf3059c3a27f8df85"
openstack | kubelet | dnsmasq-dns-6974cff98c-2t99f | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | ovn-controller-ovs-86mtg | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c"
openstack | multus | ovn-controller-5kh8v | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-7c45d57b9c-k22s7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-6974cff98c to 0 from 1
openstack | kubelet | dnsmasq-dns-6974cff98c-2t99f | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-6974cff98c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6974cff98c-2t99f
openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" in 8.783s (8.783s including waiting). Image size: 304909899 bytes.
openstack | kubelet | rabbitmq-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" in 10.934s (10.934s including waiting). Image size: 304909899 bytes.
openstack | kubelet | memcached-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:d6c93f70d8b142180af00baccabe84529baba1bb1e8bfd9bc2b58efb09aef590" in 9.052s (9.052s including waiting). Image size: 277861580 bytes.
openstack | kubelet | openstack-galera-0 | Created | Created container: mysql-bootstrap
openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:05c8a64428215567969452413877b06edfb244f075c0161cf3059c3a27f8df85" in 8.955s (8.955s including waiting). Image size: 347271461 bytes.
openstack | kubelet | openstack-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" in 9.006s (9.006s including waiting). Image size: 429866819 bytes.
openstack | kubelet | openstack-cell1-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" in 9.125s (9.125s including waiting). Image size: 429866819 bytes.
openstack | kubelet | ovsdbserver-sb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:2847fc8e7f911c23656f50e02d4fd6275e9edecdc19e9d04cc999c0fcc5bf917" in 8.372s (8.372s including waiting). Image size: 347271462 bytes.
openstack | kubelet | ovn-controller-ovs-86mtg | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c" in 8.165s (8.165s including waiting). Image size: 324698130 bytes.
openstack | kubelet | ovn-controller-5kh8v | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417" in 8.569s (8.569s including waiting). Image size: 347092937 bytes.
openstack | kubelet | openstack-galera-0 | Started | Started container mysql-bootstrap
openstack | kubelet | memcached-0 | Started | Started container memcached
openstack | kubelet | memcached-0 | Created | Created container: memcached
openstack | kubelet | ovsdbserver-sb-0 | Started | Started container ovsdbserver-sb
openstack | kubelet | ovsdbserver-sb-0 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c"
openstack | kubelet | ovsdbserver-nb-0 | Started | Started container ovsdbserver-nb
openstack | kubelet | ovn-controller-5kh8v | Created | Created container: ovn-controller
openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c"
openstack | kubelet | ovn-controller-ovs-86mtg | Started | Started container ovsdb-server-init
openstack | kubelet | ovsdbserver-sb-0 | Created | Created container: ovsdbserver-sb
openstack | kubelet | ovn-controller-5kh8v | Started | Started container ovn-controller
openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: ovsdbserver-nb
openstack | kubelet | ovn-controller-ovs-86mtg | Created | Created container: ovsdb-server-init
openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: mysql-bootstrap
openstack | kubelet | openstack-cell1-galera-0 | Started | Started container mysql-bootstrap
openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container setup-container
openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: setup-container
openstack | kubelet | rabbitmq-server-0 | Started | Started container setup-container
openstack | kubelet | rabbitmq-server-0 | Created | Created container: setup-container
openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: openstack-network-exporter
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-5c685c7df5 to 1
openstack | replicaset-controller | dnsmasq-dns-5c685c7df5 | SuccessfulCreate | Created pod: dnsmasq-dns-5c685c7df5-nbjbv
openstack | kubelet | ovsdbserver-sb-0 | Started | Started container openstack-network-exporter
openstack | daemonset-controller | ovn-controller-metrics | SuccessfulCreate | Created pod: ovn-controller-metrics-5kqv6
openstack | kubelet | ovsdbserver-sb-0 | Created | Created container: openstack-network-exporter
openstack | kubelet | ovsdbserver-sb-0 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 1.971s (1.971s including waiting). Image size: 165206333 bytes.

openstack | kubelet | ovn-controller-ovs-86mtg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c" already present on machine
openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 1.974s (1.974s including waiting). Image size: 165206333 bytes.
openstack | kubelet | ovn-controller-ovs-86mtg | Created | Created container: ovsdb-server
openstack | kubelet | ovn-controller-ovs-86mtg | Started | Started container ovsdb-server
openstack | kubelet | ovn-controller-ovs-86mtg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c" already present on machine
openstack | kubelet | ovsdbserver-nb-0 | Started | Started container openstack-network-exporter
openstack | kubelet | ovn-controller-ovs-86mtg | Created | Created container: ovs-vswitchd
openstack | kubelet | ovn-controller-ovs-86mtg | Started | Started container ovs-vswitchd
openstack | kubelet | ovn-controller-metrics-5kqv6 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine
openstack | replicaset-controller | dnsmasq-dns-5c685c7df5 | SuccessfulDelete | Deleted pod: dnsmasq-dns-5c685c7df5-nbjbv
openstack | multus | ovn-controller-metrics-5kqv6 | AddedInterface | Add eth0 [10.128.0.173/23] from ovn-kubernetes
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5c685c7df5 to 0 from 1
openstack | replicaset-controller | dnsmasq-dns-65c6cc445f | SuccessfulCreate | Created pod: dnsmasq-dns-65c6cc445f-5w2gf
openstack | kubelet | dnsmasq-dns-65c6cc445f-5w2gf | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-65c6cc445f-5w2gf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-65c6cc445f-5w2gf | Started | Started container dnsmasq-dns
openstack | kubelet | openstack-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | openstack-galera-0 | Created | Created container: galera
openstack | kubelet | dnsmasq-dns-65c6cc445f-5w2gf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-65c6cc445f-5w2gf | Created | Created container: init
openstack | kubelet | ovn-controller-metrics-5kqv6 | Created | Created container: openstack-network-exporter
openstack | kubelet | openstack-galera-0 | Started | Started container galera
openstack | kubelet | ovn-controller-metrics-5kqv6 | Started | Started container openstack-network-exporter
openstack | multus | dnsmasq-dns-65c6cc445f-5w2gf | AddedInterface | Add eth0 [10.128.0.175/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-65c6cc445f-5w2gf | Started | Started container init
openstack | kubelet | openstack-cell1-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine (x2)
openstack | persistentvolume-controller | swift-swift-storage-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2)
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | cert-manager-certificates-trigger | swift-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | statefulset-controller | ovn-northd | SuccessfulCreate | create Pod ovn-northd-0 in StatefulSet ovn-northd successful
openstack | cert-manager-certificates-key-manager | swift-internal-svc | Generated | Stored new private key in temporary Secret resource "swift-internal-svc-d7hgj"
openstack | replicaset-controller | dnsmasq-dns-5c55964f59 | SuccessfulCreate | Created pod: dnsmasq-dns-5c55964f59-4n57j
openstack | metallb-controller | swift-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | swift-swift-storage-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0"
openstack | persistentvolume-controller | swift-swift-storage-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Pod swift-storage-0 in StatefulSet swift-storage successful
openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: galera
openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success
openstack | kubelet | openstack-cell1-galera-0 | Started | Started container galera (x2)
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | replicaset-controller | dnsmasq-dns-65c6cc445f | SuccessfulDelete | Deleted pod: dnsmasq-dns-65c6cc445f-5w2gf
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | swift-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | ovn-northd-0 | AddedInterface | Add eth0 [10.128.0.176/23] from ovn-kubernetes

openstack | multus | dnsmasq-dns-5c55964f59-4n57j | AddedInterface | Add eth0 [10.128.0.177/23] from ovn-kubernetes
openstack | cert-manager-certificates-trigger | swift-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-5c55964f59-4n57j | Started | Started container init
openstack | kubelet | dnsmasq-dns-5c55964f59-4n57j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificates-key-manager | swift-public-svc | Generated | Stored new private key in temporary Secret resource "swift-public-svc-7ncvt"
openstack | cert-manager-certificates-request-manager | swift-public-svc | Requested | Created new CertificateRequest resource "swift-public-svc-1"
openstack | kubelet | dnsmasq-dns-65c6cc445f-5w2gf | Killing | Stopping container dnsmasq-dns
openstack | cert-manager-certificates-issuing | swift-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | swift-swift-storage-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-c269bc7e-f3d0-4828-8ba4-a192dd94a207
openstack | cert-manager-certificates-issuing | swift-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | swift-internal-svc | Requested | Created new CertificateRequest resource "swift-internal-svc-1"
openstack | kubelet | dnsmasq-dns-5c55964f59-4n57j | Created | Created container: init
openstack | kubelet | ovn-northd-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b5dba6c3776a5c366db4ceedbbb445c1f29b78cd2b0159ff41b9ea063a474a93"
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | swift-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5c55964f59-4n57j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificates-trigger | swift-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-request-manager | swift-public-route | Requested | Created new CertificateRequest resource "swift-public-route-1"
openstack | kubelet | dnsmasq-dns-5c55964f59-4n57j | Started | Started container dnsmasq-dns
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | job-controller | swift-ring-rebalance | SuccessfulCreate | Created pod: swift-ring-rebalance-th4vs
openstack | cert-manager-certificaterequests-approver | swift-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | swift-public-route | Generated | Stored new private key in temporary Secret resource "swift-public-route-9c7nb"
openstack | kubelet | dnsmasq-dns-5c55964f59-4n57j | Created | Created container: dnsmasq-dns
openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd
openstack | cert-manager-certificates-issuing | swift-public-route | Issuing | The certificate has been successfully issued
openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine
openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd
openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b5dba6c3776a5c366db4ceedbbb445c1f29b78cd2b0159ff41b9ea063a474a93" in 2.904s (2.905s including waiting). Image size: 347268557 bytes.
openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter (x6)
openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetUpdated | Updated DaemonSet.apps/node-ca -n openshift-image-registry because it changed
openstack | multus | swift-ring-rebalance-th4vs | AddedInterface | Add eth0 [10.128.0.179/23] from ovn-kubernetes
openstack | kubelet | swift-ring-rebalance-th4vs | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123"
openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter
openstack | job-controller | glance-db-create | SuccessfulCreate | Created pod: glance-db-create-62w87
openstack | job-controller | glance-738d-account-create-update | SuccessfulCreate | Created pod: glance-738d-account-create-update-p9hmm (x5)

openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq of Type *v1.Service
openstack | job-controller | placement-b69d-account-create-update | SuccessfulCreate | Created pod: placement-b69d-account-create-update-7dq92
openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-rfgw2
openstack | kubelet | swift-ring-rebalance-th4vs | Started | Started container swift-ring-rebalance
openstack | kubelet | swift-ring-rebalance-th4vs | Created | Created container: swift-ring-rebalance
openstack | kubelet | swift-ring-rebalance-th4vs | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123" in 4.231s (4.231s including waiting). Image size: 500498707 bytes.
openstack | job-controller | keystone-7814-account-create-update | SuccessfulCreate | Created pod: keystone-7814-account-create-update-vkdnw (x5)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-server of Type *v1.StatefulSet
openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-f9qxr
openstack | multus | glance-db-create-62w87 | AddedInterface | Add eth0 [10.128.0.181/23] from ovn-kubernetes
openstack | multus | keystone-db-create-f9qxr | AddedInterface | Add eth0 [10.128.0.182/23] from ovn-kubernetes (x5)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1 of Type *v1.Service (x5)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-server of Type *v1.StatefulSet
openstack | kubelet | glance-db-create-62w87 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine (x5)
openstack | kubelet | swift-storage-0 | FailedMount | MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found
openstack | kubelet | glance-738d-account-create-update-p9hmm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | glance-738d-account-create-update-p9hmm | AddedInterface | Add eth0 [10.128.0.180/23] from ovn-kubernetes
openstack | kubelet | keystone-7814-account-create-update-vkdnw | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | glance-738d-account-create-update-p9hmm | Started | Started container mariadb-account-create-update
openstack | kubelet | keystone-7814-account-create-update-vkdnw | Started | Started container mariadb-account-create-update
openstack | multus | placement-db-create-rfgw2 | AddedInterface | Add eth0 [10.128.0.184/23] from ovn-kubernetes
openstack | kubelet | placement-db-create-rfgw2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | placement-b69d-account-create-update-7dq92 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | dnsmasq-dns-7c45d57b9c-k22s7 | Killing | Stopping container dnsmasq-dns
openstack | multus | placement-b69d-account-create-update-7dq92 | AddedInterface | Add eth0 [10.128.0.185/23] from ovn-kubernetes
openstack | replicaset-controller | dnsmasq-dns-7c45d57b9c | SuccessfulDelete | Deleted pod: dnsmasq-dns-7c45d57b9c-k22s7
openstack | multus | keystone-7814-account-create-update-vkdnw | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes
openstack | kubelet | glance-db-create-62w87 | Started | Started container mariadb-database-create
openstack | kubelet | glance-db-create-62w87 | Created | Created container: mariadb-database-create
openstack | kubelet | keystone-7814-account-create-update-vkdnw | Created | Created container: mariadb-account-create-update
openstack | kubelet | glance-738d-account-create-update-p9hmm | Created | Created container: mariadb-account-create-update
openstack | kubelet | keystone-db-create-f9qxr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | keystone-db-create-f9qxr | Created | Created container: mariadb-database-create
openstack | kubelet | keystone-db-create-f9qxr | Started | Started container mariadb-database-create
openstack | kubelet | placement-db-create-rfgw2 | Created | Created container: mariadb-database-create
openstack | kubelet | placement-db-create-rfgw2 | Started | Started container mariadb-database-create
openstack | kubelet | placement-b69d-account-create-update-7dq92 | Created | Created container: mariadb-account-create-update
openstack | kubelet | placement-b69d-account-create-update-7dq92 | Started | Started container mariadb-account-create-update
openstack | job-controller | keystone-db-create | Completed | Job completed
openstack | job-controller | glance-738d-account-create-update | Completed | Job completed
openstack | job-controller | placement-b69d-account-create-update | Completed | Job completed
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-qw6cm
openstack | job-controller | placement-db-create | Completed | Job completed
openstack | job-controller | glance-db-create | Completed | Job completed

openstack | multus | root-account-create-update-qw6cm | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes
openstack | kubelet | root-account-create-update-qw6cm | Created | Created container: mariadb-account-create-update
openstack | kubelet | root-account-create-update-qw6cm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | job-controller | keystone-7814-account-create-update | Completed | Job completed
openstack | kubelet | root-account-create-update-qw6cm | Started | Started container mariadb-account-create-update
openstack | job-controller | glance-db-sync | SuccessfulCreate | Created pod: glance-db-sync-f4vxh
openstack | multus | glance-db-sync-f4vxh | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes
openstack | multus | swift-storage-0 | AddedInterface | Add eth0 [10.128.0.178/23] from ovn-kubernetes
openstack | multus | glance-db-sync-f4vxh | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage
openstack | kubelet | glance-db-sync-f4vxh | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416"
openstack | job-controller | swift-ring-rebalance | Completed | Job completed
openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b"
openstack | job-controller | root-account-create-update | Completed | Job completed
openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" in 1.264s (1.264s including waiting). Image size: 445458440 bytes.
openstack | kubelet | swift-storage-0 | Created | Created container: account-reaper
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" already present on machine
openstack | kubelet | swift-storage-0 | Created | Created container: account-replicator
openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:a92ecb870f7cde5bbfe109e99367b4fb913fa3319837a8d7d34dafb1e6547875"
openstack | kubelet | swift-storage-0 | Created | Created container: account-server
openstack | kubelet | swift-storage-0 | Created | Created container: account-auditor
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" already present on machine

openstack

kubelet

swift-storage-0

Started

Started container account-replicator

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" already present on machine

openstack

kubelet

swift-storage-0

Started

Started container account-server

openstack

kubelet

swift-storage-0

Started

Started container account-reaper

openstack

kubelet

swift-storage-0

Started

Started container account-auditor

openstack | kubelet | swift-storage-0 | Started | Started container container-server
openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:a92ecb870f7cde5bbfe109e99367b4fb913fa3319837a8d7d34dafb1e6547875" in 1.197s (1.197s including waiting). Image size: 445474826 bytes.
openstack | kubelet | swift-storage-0 | Created | Created container: container-replicator
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:a92ecb870f7cde5bbfe109e99367b4fb913fa3319837a8d7d34dafb1e6547875" already present on machine
openstack | kubelet | swift-storage-0 | Created | Created container: container-server
openstack | kubelet | swift-storage-0 | Started | Started container container-replicator
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-7zq2x
openstack | job-controller | ovn-controller-5kh8v-config | SuccessfulCreate | Created pod: ovn-controller-5kh8v-config-nwrnh
openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: rabbitmq
openstack | kubelet | rabbitmq-server-0 | Created | Created container: rabbitmq
openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container rabbitmq
openstack | kubelet | rabbitmq-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" already present on machine
openstack | kubelet | ovn-controller-5kh8v-config-nwrnh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417" already present on machine (x2)
openstack | kubelet | ovn-controller-5kh8v | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
openstack | multus | ovn-controller-5kh8v-config-nwrnh | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes
openstack | kubelet | glance-db-sync-f4vxh | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" in 13.384s (13.384s including waiting). Image size: 983253362 bytes.
openstack | kubelet | rabbitmq-server-0 | Started | Started container rabbitmq
openstack | multus | root-account-create-update-7zq2x | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes
openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" already present on machine
openstack | kubelet | root-account-create-update-7zq2x | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | root-account-create-update-7zq2x | Created | Created container: mariadb-account-create-update
openstack | kubelet | glance-db-sync-f4vxh | Started | Started container glance-db-sync
openstack | kubelet | root-account-create-update-7zq2x | Started | Started container mariadb-account-create-update
openstack | kubelet | ovn-controller-5kh8v-config-nwrnh | Created | Created container: ovn-config
openstack | kubelet | ovn-controller-5kh8v-config-nwrnh | Started | Started container ovn-config
openstack | kubelet | glance-db-sync-f4vxh | Created | Created container: glance-db-sync
openstack | replicaset-controller | dnsmasq-dns-6fbf68b9d7 | SuccessfulCreate | Created pod: dnsmasq-dns-6fbf68b9d7-p96gq
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Started | Started container init
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Created | Created container: init
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | multus | dnsmasq-dns-6fbf68b9d7-p96gq | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Created | Created container: dnsmasq-dns
openstack | job-controller | root-account-create-update | Completed | Job completed
openstack | job-controller | ovn-controller-5kh8v-config | SuccessfulCreate | Created pod: ovn-controller-5kh8v-config-gtgbg
openstack | job-controller | ovn-controller-5kh8v-config | Completed | Job completed
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Started | Started container dnsmasq-dns
openstack | kubelet | ovn-controller-5kh8v-config-gtgbg | Created | Created container: ovn-config
openstack | kubelet | ovn-controller-5kh8v-config-gtgbg | Started | Started container ovn-config
openstack | kubelet | ovn-controller-5kh8v-config-gtgbg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417" already present on machine
openstack | multus | ovn-controller-5kh8v-config-gtgbg | AddedInterface | Add eth0 [10.128.0.191/23] from ovn-kubernetes
openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered
openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered
openstack | job-controller | ovn-controller-5kh8v-config | Completed | Job completed
openstack | job-controller | glance-db-sync | Completed | Job completed
openstack | replicaset-controller | dnsmasq-dns-6fbf68b9d7 | SuccessfulDelete | Deleted pod: dnsmasq-dns-6fbf68b9d7-p96gq
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Unhealthy | Readiness probe failed: dial tcp 10.128.0.190:5353: connect: connection refused
openstack | kubelet | dnsmasq-dns-6fbf68b9d7-p96gq | Killing | Stopping container dnsmasq-dns (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | glance-default-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | cert-manager-certificates-issuing | glance-default-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | glance-default-internal-svc | Requested | Created new CertificateRequest resource "glance-default-internal-svc-1"
openstack | cert-manager-certificates-key-manager | glance-default-internal-svc | Generated | Stored new private key in temporary Secret resource "glance-default-internal-svc-vj74l"
openstack | cert-manager-certificates-trigger | glance-default-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | replicaset-controller | dnsmasq-dns-674c8b7b9c | SuccessfulCreate | Created pod: dnsmasq-dns-674c8b7b9c-9fj6z
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | glance-default-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | glance-default-public-svc | Requested | Created new CertificateRequest resource "glance-default-public-svc-1"
openstack | cert-manager-certificates-issuing | glance-default-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | glance-default-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | dnsmasq-dns-674c8b7b9c-9fj6z | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-674c8b7b9c-9fj6z | Started | Started container init
openstack | multus | dnsmasq-dns-674c8b7b9c-9fj6z | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-674c8b7b9c-9fj6z | Created | Created container: init
openstack | kubelet | dnsmasq-dns-674c8b7b9c-9fj6z | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificates-trigger | glance-default-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | glance-default-public-svc | Generated | Stored new private key in temporary Secret resource "glance-default-public-svc-sf6br"
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | glance-default-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | glance-default-public-route | Requested | Created new CertificateRequest resource "glance-default-public-route-1"
openstack | kubelet | dnsmasq-dns-674c8b7b9c-9fj6z | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificates-key-manager | glance-default-public-route | Generated | Stored new private key in temporary Secret resource "glance-default-public-route-55xj4"
openstack | cert-manager-certificates-trigger | glance-default-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | glance-default-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | kubelet | dnsmasq-dns-674c8b7b9c-9fj6z | Started | Started container dnsmasq-dns
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | metallb-speaker | rabbitmq | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | job-controller | neutron-db-create | SuccessfulCreate | Created pod: neutron-db-create-fcxq8
openstack | job-controller | cinder-db-create | SuccessfulCreate | Created pod: cinder-db-create-sns65
openstack | job-controller | neutron-828b-account-create-update | SuccessfulCreate | Created pod: neutron-828b-account-create-update-4bgnt
openstack | job-controller | cinder-3d67-account-create-update | SuccessfulCreate | Created pod: cinder-3d67-account-create-update-vkpgp
openstack | job-controller | keystone-db-sync | SuccessfulCreate | Created pod: keystone-db-sync-j2nkz
openstack | kubelet | cinder-db-create-sns65 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | cinder-db-create-sns65 | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes
openstack | kubelet | cinder-3d67-account-create-update-vkpgp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | cinder-3d67-account-create-update-vkpgp | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes
openstack | kubelet | neutron-828b-account-create-update-4bgnt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | cinder-3d67-account-create-update-vkpgp | Created | Created container: mariadb-account-create-update
openstack | kubelet | neutron-db-create-fcxq8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | neutron-db-create-fcxq8 | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes
openstack | multus | neutron-828b-account-create-update-4bgnt | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes
openstack | metallb-speaker | rabbitmq-cell1 | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | kubelet | cinder-db-create-sns65 | Started | Started container mariadb-database-create
openstack | kubelet | cinder-db-create-sns65 | Created | Created container: mariadb-database-create
openstack | kubelet | neutron-828b-account-create-update-4bgnt | Created | Created container: mariadb-account-create-update
openstack | multus | keystone-db-sync-j2nkz | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes
openstack | kubelet | keystone-db-sync-j2nkz | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8"
openstack | kubelet | cinder-3d67-account-create-update-vkpgp | Started | Started container mariadb-account-create-update
openstack | kubelet | neutron-db-create-fcxq8 | Created | Created container: mariadb-database-create
openstack | kubelet | neutron-db-create-fcxq8 | Started | Started container mariadb-database-create
openstack | kubelet | neutron-828b-account-create-update-4bgnt | Started | Started container mariadb-account-create-update
openstack | job-controller | cinder-db-create | Completed | Job completed
openstack | kubelet | dnsmasq-dns-5c55964f59-4n57j | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-5c55964f59 | SuccessfulDelete | Deleted pod: dnsmasq-dns-5c55964f59-4n57j
openstack | kubelet | keystone-db-sync-j2nkz | Started | Started container keystone-db-sync
openstack | kubelet | keystone-db-sync-j2nkz | Created | Created container: keystone-db-sync
openstack | kubelet | keystone-db-sync-j2nkz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" in 5.275s (5.275s including waiting). Image size: 520429064 bytes.
openstack | job-controller | neutron-db-create | Completed | Job completed
openstack | job-controller | neutron-828b-account-create-update | Completed | Job completed
openstack | job-controller | cinder-3d67-account-create-update | Completed | Job completed
openstack | job-controller | keystone-db-sync | Completed | Job completed
openstack | job-controller | neutron-db-sync | SuccessfulCreate | Created pod: neutron-db-sync-m7xgd
openstack | job-controller | cinder-b7346-db-sync | SuccessfulCreate | Created pod: cinder-b7346-db-sync-f9mbk
openstack | persistentvolume-controller | glance-glance-bdafd-default-external-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | statefulset-controller | glance-bdafd-default-external-api | SuccessfulCreate | create Claim glance-glance-bdafd-default-external-api-0 Pod glance-bdafd-default-external-api-0 in StatefulSet glance-bdafd-default-external-api success
openstack | job-controller | ironic-db-create | SuccessfulCreate | Created pod: ironic-db-create-hgms6
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | glance-glance-bdafd-default-external-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-bdafd-default-external-api-0"
openstack | replicaset-controller | dnsmasq-dns-77dd9bf7ff | SuccessfulCreate | Created pod: dnsmasq-dns-77dd9bf7ff-sv6dm
openstack | statefulset-controller | glance-bdafd-default-internal-api | SuccessfulCreate | create Claim glance-glance-bdafd-default-internal-api-0 Pod glance-bdafd-default-internal-api-0 in StatefulSet glance-bdafd-default-internal-api success
openstack | persistentvolume-controller | glance-glance-bdafd-default-internal-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-dcp4q
openstack | metallb-controller | keystone-internal | IPAllocated | Assigned IP ["172.17.0.80"] (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | keystone-internal-svc | Requested | Created new CertificateRequest resource "keystone-internal-svc-1" (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | placement-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | job-controller | placement-db-sync | SuccessfulCreate | Created pod: placement-db-sync-629gt
openstack | replicaset-controller | dnsmasq-dns-564d4966c5 | SuccessfulCreate | Created pod: dnsmasq-dns-564d4966c5-82kwv
openstack | job-controller | ironic-b901-account-create-update | SuccessfulCreate | Created pod: ironic-b901-account-create-update-vmptn (x2)
openstack | persistentvolume-controller | glance-glance-bdafd-default-internal-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2)
openstack | persistentvolume-controller | glance-glance-bdafd-default-external-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | keystone-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | keystone-internal-svc | Generated | Stored new private key in temporary Secret resource "keystone-internal-svc-98hx4"
openstack | replicaset-controller | dnsmasq-dns-77dd9bf7ff | SuccessfulDelete | Deleted pod: dnsmasq-dns-77dd9bf7ff-sv6dm
openstack | cert-manager-certificates-issuing | keystone-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1"
openstack | multus | keystone-bootstrap-dcp4q | AddedInterface | Add eth0 [10.128.0.198/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | neutron-db-sync-m7xgd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | kubelet | ironic-db-create-hgms6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | ironic-db-create-hgms6 | AddedInterface | Add eth0 [10.128.0.201/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | neutron-db-sync-m7xgd | AddedInterface | Add eth0 [10.128.0.200/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-lbfr6"
openstack | kubelet | keystone-bootstrap-dcp4q | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine
openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | keystone-bootstrap-dcp4q | Created | Created container: keystone-bootstrap
openstack | kubelet | keystone-bootstrap-dcp4q | Started | Started container keystone-bootstrap
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | glance-glance-bdafd-default-internal-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-bdafd-default-internal-api-0"
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | glance-glance-bdafd-default-external-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf
openstack | multus | cinder-b7346-db-sync-f9mbk | AddedInterface | Add eth0 [10.128.0.202/23] from ovn-kubernetes
openstack | kubelet | cinder-b7346-db-sync-f9mbk | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead"
openstack | cert-manager-certificates-trigger | keystone-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | keystone-public-route | Generated | Stored new private key in temporary Secret resource "keystone-public-route-klkl8"
openstack | cert-manager-certificates-request-manager | keystone-public-route | Requested | Created new CertificateRequest resource "keystone-public-route-1"
openstack | kubelet | dnsmasq-dns-77dd9bf7ff-sv6dm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | multus | dnsmasq-dns-77dd9bf7ff-sv6dm | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | multus | dnsmasq-dns-564d4966c5-82kwv | AddedInterface | Add eth0 [10.128.0.205/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-77dd9bf7ff-sv6dm | Created | Created container: init
openstack | kubelet | ironic-b901-account-create-update-vmptn | Created | Created container: mariadb-account-create-update
openstack | cert-manager-certificates-request-manager | placement-internal-svc | Requested | Created new CertificateRequest resource "placement-internal-svc-1"
openstack | cert-manager-certificates-key-manager | placement-internal-svc | Generated | Stored new private key in temporary Secret resource "placement-internal-svc-vrt7x"
openstack | cert-manager-certificates-trigger | placement-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | placement-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-venafi | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | neutron-db-sync-m7xgd | Started | Started container neutron-db-sync
openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | ironic-b901-account-create-update-vmptn | Started | Started container mariadb-account-create-update
openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | glance-glance-bdafd-default-internal-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7
openstack | multus | ironic-b901-account-create-update-vmptn | AddedInterface | Add eth0 [10.128.0.203/23] from ovn-kubernetes
openstack | kubelet | ironic-db-create-hgms6 | Started | Started container mariadb-database-create
openstack | kubelet | neutron-db-sync-m7xgd | Created | Created container: neutron-db-sync
openstack | kubelet | ironic-b901-account-create-update-vmptn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | ironic-db-create-hgms6 | Created | Created container: mariadb-database-create
openstack | cert-manager-certificaterequests-issuer-vault | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-564d4966c5-82kwv | Started | Started container init
openstack | cert-manager-certificaterequests-issuer-acme | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved

openstack

kubelet

placement-db-sync-629gt

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0"

openstack

cert-manager-certificaterequests-approver

keystone-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

multus

placement-db-sync-629gt

AddedInterface

Add eth0 [10.128.0.204/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-77dd9bf7ff-sv6dm

Started

Started container init

openstack

kubelet

dnsmasq-dns-564d4966c5-82kwv

Created

Created container: init

openstack

kubelet

dnsmasq-dns-564d4966c5-82kwv

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificates-issuing

keystone-public-route

Issuing

The certificate has been successfully issued

openstack

kubelet

dnsmasq-dns-564d4966c5-82kwv

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificates-issuing

placement-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-trigger

placement-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

placement-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-request-manager

placement-public-svc

Requested

Created new CertificateRequest resource "placement-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

placement-public-svc

Generated

Stored new private key in temporary Secret resource "placement-public-svc-8thlk"

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-564d4966c5-82kwv

Started

Started container dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-acme

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-564d4966c5-82kwv

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-vault

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

placement-public-route

Requested

Created new CertificateRequest resource "placement-public-route-1"

openstack

cert-manager-certificaterequests-issuer-vault

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

placement-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

placement-public-route

Generated

Stored new private key in temporary Secret resource "placement-public-route-4c9g5"

openstack

cert-manager-certificaterequests-approver

placement-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

placement-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

placement-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

placement-db-sync-629gt

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" in 7.333s (7.333s including waiting). Image size: 472994007 bytes.

openstack

multus

glance-bdafd-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.207/23] from ovn-kubernetes

openstack

multus

glance-bdafd-default-internal-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

kubelet

glance-bdafd-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

job-controller

ironic-b901-account-create-update

Completed

Job completed

openstack

kubelet

placement-db-sync-629gt

Created

Created container: placement-db-sync

openstack

kubelet

placement-db-sync-629gt

Started

Started container placement-db-sync

openstack

job-controller

ironic-db-create

Completed

Job completed

openstack

kubelet

dnsmasq-dns-674c8b7b9c-9fj6z

Killing

Stopping container dnsmasq-dns

openstack

replicaset-controller

dnsmasq-dns-674c8b7b9c

SuccessfulDelete

Deleted pod: dnsmasq-dns-674c8b7b9c-9fj6z

openstack

job-controller

ironic-db-sync

SuccessfulCreate

Created pod: ironic-db-sync-s9d6l

openstack

multus

glance-bdafd-default-external-api-0

AddedInterface

Add eth0 [10.128.0.208/23] from ovn-kubernetes

openstack

multus

glance-bdafd-default-external-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

kubelet

glance-bdafd-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-bdafd-default-internal-api-0

Created

Created container: glance-log

openstack

kubelet

glance-bdafd-default-internal-api-0

Started

Started container glance-log

openstack

kubelet

glance-bdafd-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

multus

ironic-db-sync-s9d6l

AddedInterface

Add eth0 [10.128.0.209/23] from ovn-kubernetes

openstack

kubelet

ironic-db-sync-s9d6l

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556"
(x25)

openstack

metallb-speaker

dnsmasq-dns

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

glance-bdafd-default-external-api-0

Started

Started container glance-log

openstack

kubelet

glance-bdafd-default-external-api-0

Created

Created container: glance-log

openstack

kubelet

glance-bdafd-default-internal-api-0

Started

Started container glance-httpd

openstack

kubelet

glance-bdafd-default-internal-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-bdafd-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

job-controller

keystone-bootstrap

SuccessfulCreate

Created pod: keystone-bootstrap-trt9l

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

kubelet

glance-bdafd-default-external-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-bdafd-default-external-api-0

Started

Started container glance-httpd

openstack

kubelet

keystone-bootstrap-trt9l

Created

Created container: keystone-bootstrap

openstack

kubelet

keystone-bootstrap-trt9l

Started

Started container keystone-bootstrap

openstack

multus

keystone-bootstrap-trt9l

AddedInterface

Add eth0 [10.128.0.210/23] from ovn-kubernetes

openstack

kubelet

keystone-bootstrap-trt9l

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine

openstack

job-controller

placement-db-sync

Completed

Job completed

openstack

replicaset-controller

placement-fb464bf7d

SuccessfulCreate

Created pod: placement-fb464bf7d-gv8b6

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled up replica set placement-fb464bf7d to 1

openstack

kubelet

cinder-b7346-db-sync-f9mbk

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" in 27.749s (27.749s including waiting). Image size: 1161440551 bytes.

openstack

replicaset-controller

keystone-64cf598f88

SuccessfulCreate

Created pod: keystone-64cf598f88-t2877

openstack

kubelet

ironic-db-sync-s9d6l

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" in 17.002s (17.002s including waiting). Image size: 599312972 bytes.

openstack

kubelet

ironic-db-sync-s9d6l

Created

Created container: init

openstack

kubelet

ironic-db-sync-s9d6l

Started

Started container init

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

kubelet

placement-fb464bf7d-gv8b6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" already present on machine

openstack

deployment-controller

keystone

ScalingReplicaSet

Scaled up replica set keystone-64cf598f88 to 1

openstack

multus

placement-fb464bf7d-gv8b6

AddedInterface

Add eth0 [10.128.0.211/23] from ovn-kubernetes

openstack

kubelet

placement-fb464bf7d-gv8b6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" already present on machine

openstack

kubelet

keystone-64cf598f88-t2877

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine

openstack

kubelet

ironic-db-sync-s9d6l

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" already present on machine

openstack

kubelet

placement-fb464bf7d-gv8b6

Started

Started container placement-api

openstack

kubelet

placement-fb464bf7d-gv8b6

Created

Created container: placement-api

openstack

kubelet

placement-fb464bf7d-gv8b6

Started

Started container placement-log

openstack

kubelet

placement-fb464bf7d-gv8b6

Created

Created container: placement-log

openstack

kubelet

ironic-db-sync-s9d6l

Created

Created container: ironic-db-sync

openstack

kubelet

ironic-db-sync-s9d6l

Started

Started container ironic-db-sync

openstack

kubelet

cinder-b7346-db-sync-f9mbk

Created

Created container: cinder-b7346-db-sync

openstack

kubelet

cinder-b7346-db-sync-f9mbk

Started

Started container cinder-b7346-db-sync

openstack

kubelet

keystone-64cf598f88-t2877

Started

Started container keystone-api

openstack

kubelet

keystone-64cf598f88-t2877

Created

Created container: keystone-api

openstack

multus

keystone-64cf598f88-t2877

AddedInterface

Add eth0 [10.128.0.212/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

metallb-controller

neutron-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

job-controller

neutron-db-sync

Completed

Job completed

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-d477bdc58 to 1

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

neutron-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

neutron-internal-svc

Generated

Stored new private key in temporary Secret resource "neutron-internal-svc-ccmgw"

openstack

cert-manager-certificates-request-manager

neutron-internal-svc

Requested

Created new CertificateRequest resource "neutron-internal-svc-1"

openstack

cert-manager-certificates-issuing

neutron-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

neutron-d477bdc58

SuccessfulCreate

Created pod: neutron-d477bdc58-p8d8s

openstack

replicaset-controller

dnsmasq-dns-84969fcbcc

SuccessfulCreate

Created pod: dnsmasq-dns-84969fcbcc-27cm6

openstack

kubelet

neutron-d477bdc58-p8d8s

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificates-request-manager

neutron-public-route

Requested

Created new CertificateRequest resource "neutron-public-route-1"

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

neutron-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

neutron-public-svc

Requested

Created new CertificateRequest resource "neutron-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

neutron-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

neutron-public-svc

Generated

Stored new private key in temporary Secret resource "neutron-public-svc-w2k52"

openstack

multus

neutron-d477bdc58-p8d8s

AddedInterface

Add eth0 [10.128.0.214/23] from ovn-kubernetes

openstack

cert-manager-certificates-key-manager

neutron-public-route

Generated

Stored new private key in temporary Secret resource "neutron-public-route-zpwtx"

openstack

multus

neutron-d477bdc58-p8d8s

AddedInterface

Add internalapi [172.17.0.32/24] from openstack/internalapi

openstack

kubelet

neutron-d477bdc58-p8d8s

Created

Created container: neutron-api

openstack

multus

dnsmasq-dns-84969fcbcc-27cm6

AddedInterface

Add eth0 [10.128.0.213/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-84969fcbcc-27cm6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

dnsmasq-dns-84969fcbcc-27cm6

Created

Created container: init

openstack

kubelet

dnsmasq-dns-84969fcbcc-27cm6

Started

Started container init

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-84969fcbcc-27cm6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

neutron-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

neutron-d477bdc58-p8d8s

Created

Created container: neutron-httpd

openstack

cert-manager-certificates-issuing

neutron-public-route

Issuing

The certificate has been successfully issued

openstack

job-controller

cinder-b7346-db-sync

Completed

Job completed

openstack

kubelet

dnsmasq-dns-84969fcbcc-27cm6

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-84969fcbcc-27cm6

Created

Created container: dnsmasq-dns

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-564b95b965 to 1

openstack

kubelet

neutron-d477bdc58-p8d8s

Started

Started container neutron-httpd

openstack

replicaset-controller

neutron-564b95b965

SuccessfulCreate

Created pod: neutron-564b95b965-jqq92

openstack

kubelet

neutron-d477bdc58-p8d8s

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

neutron-d477bdc58-p8d8s

Started

Started container neutron-api

openstack

multus

neutron-564b95b965-jqq92

AddedInterface

Add eth0 [10.128.0.215/23] from ovn-kubernetes

openstack

multus

neutron-564b95b965-jqq92

AddedInterface

Add internalapi [172.17.0.33/24] from openstack/internalapi

openstack

replicaset-controller

dnsmasq-dns-66c9d5d889

SuccessfulCreate

Created pod: dnsmasq-dns-66c9d5d889-nmpw7

openstack

metallb-controller

cinder-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

kubelet

neutron-564b95b965-jqq92

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

replicaset-controller

dnsmasq-dns-84969fcbcc

SuccessfulDelete

Deleted pod: dnsmasq-dns-84969fcbcc-27cm6

openstack

kubelet

dnsmasq-dns-84969fcbcc-27cm6

Killing

Stopping container dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

neutron-564b95b965-jqq92

Started

Started container neutron-httpd

openstack

kubelet

neutron-564b95b965-jqq92

Created

Created container: neutron-httpd

openstack

kubelet

neutron-564b95b965-jqq92

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

neutron-564b95b965-jqq92

Started

Started container neutron-api

openstack

kubelet

neutron-564b95b965-jqq92

Created

Created container: neutron-api

openstack

kubelet

dnsmasq-dns-66c9d5d889-nmpw7

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificates-issuing

cinder-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-internal-svc

Requested

Created new CertificateRequest resource "cinder-internal-svc-1"

openstack

cert-manager-certificates-key-manager

cinder-internal-svc

Generated

Stored new private key in temporary Secret resource "cinder-internal-svc-rxcbl"

openstack

cert-manager-certificates-trigger

cinder-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

dnsmasq-dns-66c9d5d889-nmpw7

AddedInterface

Add eth0 [10.128.0.218/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-approver

cinder-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

cinder-b7346-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.217/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

cinder-b7346-scheduler-0

AddedInterface

Add eth0 [10.128.0.216/23] from ovn-kubernetes

openstack

kubelet

cinder-b7346-scheduler-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4"

openstack

multus

cinder-b7346-backup-0

AddedInterface

Add eth0 [10.128.0.219/23] from ovn-kubernetes

openstack

multus

cinder-b7346-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

cinder-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-b7346-scheduler-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" in 999ms (999ms including waiting). Image size: 1083291295 bytes.

openstack

kubelet

dnsmasq-dns-66c9d5d889-nmpw7

Started

Started container init

openstack

cert-manager-certificaterequests-approver

cinder-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

dnsmasq-dns-66c9d5d889-nmpw7

Created

Created container: init

openstack

kubelet

dnsmasq-dns-66c9d5d889-nmpw7

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

cinder-b7346-volume-lvm-iscsi-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d"

openstack

kubelet

cinder-b7346-backup-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7"

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

cinder-b7346-api-0

AddedInterface

Add eth0 [10.128.0.220/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-b7346-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

cinder-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-request-manager

cinder-public-svc

Namespace | Component | RelatedObject | Reason | Message
Requested | Created new CertificateRequest resource "cinder-public-svc-1"
openstack | cert-manager-certificates-key-manager | cinder-public-svc | Generated | Stored new private key in temporary Secret resource "cinder-public-svc-zdbm8"
openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | cinder-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | cinder-b7346-scheduler-0 | Created | Created container: cinder-scheduler
openstack | kubelet | dnsmasq-dns-66c9d5d889-nmpw7 | Started | Started container dnsmasq-dns
openstack | kubelet | cinder-b7346-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" already present on machine
openstack | kubelet | cinder-b7346-api-0 | Started | Started container cinder-b7346-api-log
openstack | kubelet | cinder-b7346-backup-0 | Started | Started container cinder-backup
openstack | cert-manager-certificates-issuing | cinder-public-route | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-66c9d5d889-nmpw7 | Created | Created container: dnsmasq-dns
openstack | kubelet | cinder-b7346-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine
openstack | cert-manager-certificates-request-manager | cinder-public-route | Requested | Created new CertificateRequest resource "cinder-public-route-1"
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" in 993ms (993ms including waiting). Image size: 1084233182 bytes.
openstack | kubelet | cinder-b7346-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" already present on machine
openstack | kubelet | cinder-b7346-scheduler-0 | Started | Started container cinder-scheduler
openstack | kubelet | cinder-b7346-backup-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" in 1.03s (1.03s including waiting). Image size: 1083296539 bytes.
openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | cinder-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | cinder-b7346-backup-0 | Created | Created container: cinder-backup
openstack | cert-manager-certificates-key-manager | cinder-public-route | Generated | Stored new private key in temporary Secret resource "cinder-public-route-rb6jj"
openstack | kubelet | cinder-b7346-api-0 | Created | Created container: cinder-b7346-api-log
openstack | kubelet | cinder-b7346-api-0 | Created | Created container: cinder-api
openstack | statefulset-controller | cinder-b7346-api | SuccessfulDelete | delete Pod cinder-b7346-api-0 in StatefulSet cinder-b7346-api successful
openstack | kubelet | cinder-b7346-backup-0 | Started | Started container probe
openstack | kubelet | cinder-b7346-backup-0 | Created | Created container: probe
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Started | Started container probe
openstack | kubelet | cinder-b7346-scheduler-0 | Created | Created container: probe
openstack | kubelet | cinder-b7346-scheduler-0 | Started | Started container probe
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Created | Created container: cinder-volume
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Created | Created container: probe
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" already present on machine
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Started | Started container cinder-volume
openstack | kubelet | cinder-b7346-api-0 | Started | Started container cinder-api
openstack | kubelet | cinder-b7346-api-0 | Killing | Stopping container cinder-b7346-api-log
openstack | kubelet | cinder-b7346-api-0 | Killing | Stopping container cinder-api (x2)
openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | replicaset-controller | placement-7d9548858 | SuccessfulCreate | Created pod: placement-7d9548858-h45cl
openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding (x2)
openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | statefulset-controller | cinder-b7346-api | SuccessfulCreate | create Pod cinder-b7346-api-0 in StatefulSet cinder-b7346-api successful
openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success
openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-7d9548858 to 1 (x2)
openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | job-controller | ironic-inspector-6402-account-create-update | SuccessfulCreate | Created pod: ironic-inspector-6402-account-create-update-kj7ts
openstack | metallb-controller | ironic-internal | IPAllocated | Assigned IP ["192.168.122.80"]
openstack | job-controller | ironic-inspector-db-create | SuccessfulCreate | Created pod: ironic-inspector-db-create-pwcj4
openstack | job-controller | ironic-db-sync | Completed | Job completed
openstack | replicaset-controller | dnsmasq-dns-66c9d5d889 | SuccessfulDelete | Deleted pod: dnsmasq-dns-66c9d5d889-nmpw7
openstack | replicaset-controller | ironic-neutron-agent-856d98ff5d | SuccessfulCreate | Created pod: ironic-neutron-agent-856d98ff5d-2p7np
openstack | kubelet | dnsmasq-dns-66c9d5d889-nmpw7 | Killing | Stopping container dnsmasq-dns
openstack | multus | cinder-b7346-api-0 | AddedInterface | Add eth0 [10.128.0.221/23] from ovn-kubernetes
openstack | kubelet | placement-7d9548858-h45cl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" already present on machine
openstack | multus | placement-7d9548858-h45cl | AddedInterface | Add eth0 [10.128.0.222/23] from ovn-kubernetes
openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled up replica set ironic-555fd64789 to 1
openstack | cert-manager-certificates-trigger | ironic-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | deployment-controller | ironic-neutron-agent | ScalingReplicaSet | Scaled up replica set ironic-neutron-agent-856d98ff5d to 1
openstack | replicaset-controller | ironic-555fd64789 | SuccessfulCreate | Created pod: ironic-555fd64789-cgpft
openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | var-lib-ironic-ironic-conductor-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0"
openstack | kubelet | cinder-b7346-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine
openstack | replicaset-controller | dnsmasq-dns-7d9d8bd467 | SuccessfulCreate | Created pod: dnsmasq-dns-7d9d8bd467-64rvv
openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful
openstack | multus | ironic-neutron-agent-856d98ff5d-2p7np | AddedInterface | Add eth0 [10.128.0.225/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-7fd9747c7b-h8dsz_36f49b67-8fd8-4a79-b706-a08c5cbc15bf | var-lib-ironic-ironic-conductor-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-856664b8-8c8a-4ded-8789-2098a6951852
openstack | kubelet | ironic-neutron-agent-856d98ff5d-2p7np | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:a3f5b519c7fc33e9f66fe553a7bc5cce51c3ff01223190cfa93bb75149a1dfcc"
openstack | cert-manager-certificates-issuing | ironic-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | ironic-internal-svc | Requested | Created new CertificateRequest resource "ironic-internal-svc-1"
openstack | cert-manager-certificates-key-manager | ironic-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-internal-svc-j826g"
openstack | multus | ironic-inspector-6402-account-create-update-kj7ts | AddedInterface | Add eth0 [10.128.0.224/23] from ovn-kubernetes
openstack | kubelet | ironic-inspector-6402-account-create-update-kj7ts | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | cert-manager-certificaterequests-issuer-ca | ironic-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | ironic-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | kubelet | placement-7d9548858-h45cl | Created | Created container: placement-log
openstack | multus | ironic-inspector-db-create-pwcj4 | AddedInterface | Add eth0 [10.128.0.223/23] from ovn-kubernetes
openstack | kubelet | ironic-inspector-db-create-pwcj4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | cert-manager-certificates-trigger | ironic-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | placement-7d9548858-h45cl | Started | Started container placement-log
openstack | kubelet | placement-7d9548858-h45cl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" already present on machine
openstack | cert-manager-certificaterequests-issuer-vault | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | placement-7d9548858-h45cl | Created | Created container: placement-api
openstack | kubelet | ironic-inspector-db-create-pwcj4 | Started | Started container mariadb-database-create
openstack | kubelet | ironic-inspector-db-create-pwcj4 | Created | Created container: mariadb-database-create
openstack | cert-manager-certificates-issuing | ironic-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | ironic-public-svc | Requested | Created new CertificateRequest resource "ironic-public-svc-1"
openstack | kubelet | ironic-inspector-6402-account-create-update-kj7ts | Started | Started container mariadb-account-create-update
openstack | kubelet | ironic-inspector-6402-account-create-update-kj7ts | Created | Created container: mariadb-account-create-update
openstack | kubelet | placement-7d9548858-h45cl | Started | Started container placement-api
openstack | kubelet | dnsmasq-dns-7d9d8bd467-64rvv | Created | Created container: init
openstack | kubelet | dnsmasq-dns-7d9d8bd467-64rvv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | multus | dnsmasq-dns-7d9d8bd467-64rvv | AddedInterface | Add eth0 [10.128.0.227/23] from ovn-kubernetes
openstack | kubelet | ironic-555fd64789-cgpft | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f"
openstack | cert-manager-certificates-key-manager | ironic-public-svc | Generated | Stored new private key in temporary Secret resource "ironic-public-svc-z6rtj"
openstack | cert-manager-certificates-trigger | ironic-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | cinder-b7346-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine
openstack | kubelet | cinder-b7346-api-0 | Started | Started container cinder-b7346-api-log
openstack | kubelet | cinder-b7346-api-0 | Created | Created container: cinder-b7346-api-log
openstack | cert-manager-certificates-key-manager | ironic-public-route | Generated | Stored new private key in temporary Secret resource "ironic-public-route-57wbz"
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | ironic-555fd64789-cgpft | AddedInterface | Add eth0 [10.128.0.226/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-acme | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ironic-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | dnsmasq-dns-7d9d8bd467-64rvv | Started | Started container init
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Killing | Stopping container cinder-volume
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Killing | Stopping container probe
openstack | kubelet | cinder-b7346-api-0 | Started | Started container cinder-api
openstack | statefulset-controller | cinder-b7346-volume-lvm-iscsi | SuccessfulDelete | delete Pod cinder-b7346-volume-lvm-iscsi-0 in StatefulSet cinder-b7346-volume-lvm-iscsi successful
openstack | kubelet | cinder-b7346-backup-0 | Killing | Stopping container cinder-backup
openstack | kubelet | cinder-b7346-scheduler-0 | Killing | Stopping container probe
openstack | kubelet | cinder-b7346-scheduler-0 | Killing | Stopping container cinder-scheduler
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | cinder-b7346-backup-0 | Killing | Stopping container probe
openstack | cert-manager-certificaterequests-issuer-vault | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | statefulset-controller | cinder-b7346-scheduler | SuccessfulDelete | delete Pod cinder-b7346-scheduler-0 in StatefulSet cinder-b7346-scheduler successful
openstack | cert-manager-certificaterequests-issuer-acme | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-7d9d8bd467-64rvv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ironic-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | replicaset-controller | ironic-6cc9f57487 | SuccessfulCreate | Created pod: ironic-6cc9f57487-vklxq
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | cinder-b7346-api-0 | Created | Created container: cinder-api
openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled up replica set ironic-6cc9f57487 to 1
openstack | cert-manager-certificates-issuing | ironic-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | ironic-public-route | Requested | Created new CertificateRequest resource "ironic-public-route-1"
openstack | statefulset-controller | cinder-b7346-backup | SuccessfulDelete | delete Pod cinder-b7346-backup-0 in StatefulSet cinder-b7346-backup successful
openstack | kubelet | ironic-neutron-agent-856d98ff5d-2p7np | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:a3f5b519c7fc33e9f66fe553a7bc5cce51c3ff01223190cfa93bb75149a1dfcc" in 3.948s (3.948s including waiting). Image size: 655390550 bytes.
openstack | kubelet | ironic-555fd64789-cgpft | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" in 3.671s (3.671s including waiting). Image size: 536433442 bytes.
openstack | multus | ironic-conductor-0 | AddedInterface | Add ironic [172.20.1.31/24] from openstack/ironic
openstack | multus | ironic-6cc9f57487-vklxq | AddedInterface | Add eth0 [10.128.0.229/23] from ovn-kubernetes
openstack | multus | ironic-conductor-0 | AddedInterface | Add eth0 [10.128.0.228/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-7d9d8bd467-64rvv | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-7d9d8bd467-64rvv | Started | Started container dnsmasq-dns
openstack | kubelet | ironic-555fd64789-cgpft | Started | Started container init
openstack | kubelet | ironic-555fd64789-cgpft | Created | Created container: init
openstack | kubelet | ironic-6cc9f57487-vklxq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine
openstack | job-controller | ironic-inspector-6402-account-create-update | Completed | Job completed (x2)
openstack | statefulset-controller | cinder-b7346-volume-lvm-iscsi | SuccessfulCreate | create Pod cinder-b7346-volume-lvm-iscsi-0 in StatefulSet cinder-b7346-volume-lvm-iscsi successful
openstack | job-controller | ironic-inspector-db-create | Completed | Job completed (x2)
openstack | statefulset-controller | cinder-b7346-backup | SuccessfulCreate | create Pod cinder-b7346-backup-0 in StatefulSet cinder-b7346-backup successful (x2)
openstack | statefulset-controller | cinder-b7346-scheduler | SuccessfulCreate | create Pod cinder-b7346-scheduler-0 in StatefulSet cinder-b7346-scheduler successful
openstack | multus | cinder-b7346-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.230/23] from ovn-kubernetes
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" already present on machine
openstack | kubelet | ironic-555fd64789-cgpft | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" already present on machine
openstack | kubelet | ironic-conductor-0 | Created | Created container: init
openstack | kubelet | ironic-6cc9f57487-vklxq | Started | Started container init
openstack | kubelet | ironic-6cc9f57487-vklxq | Created | Created container: init
openstack | kubelet | ironic-conductor-0 | Started | Started container init
openstack | kubelet | cinder-b7346-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" already present on machine
openstack | kubelet | ironic-555fd64789-cgpft | Started | Started container ironic-api-log
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Started | Started container cinder-volume
openstack | kubelet | ironic-555fd64789-cgpft | Created | Created container: ironic-api-log
openstack | kubelet | cinder-b7346-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" already present on machine
openstack | multus | cinder-b7346-scheduler-0 | AddedInterface | Add eth0 [10.128.0.232/23] from ovn-kubernetes
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Created | Created container: probe
openstack | kubelet | cinder-b7346-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" already present on machine
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Started | Started container probe
openstack | kubelet | cinder-b7346-backup-0 | Created | Created container: cinder-backup
openstack | kubelet | cinder-b7346-backup-0 | Started | Started container cinder-backup
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Created | Created container: cinder-volume
openstack | multus | cinder-b7346-backup-0 | AddedInterface | Add eth0 [10.128.0.231/23] from ovn-kubernetes
openstack | kubelet | cinder-b7346-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" already present on machine
openstack | multus | cinder-b7346-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage
openstack | kubelet | cinder-b7346-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" already present on machine
openstack | kubelet | ironic-6cc9f57487-vklxq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine
openstack | kubelet | ironic-6cc9f57487-vklxq | Created | Created container: ironic-api-log
openstack | kubelet | ironic-6cc9f57487-vklxq | Started | Started container ironic-api-log
openstack | kubelet | ironic-6cc9f57487-vklxq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine
openstack | kubelet | ironic-6cc9f57487-vklxq | Created | Created container: ironic-api
openstack | kubelet | ironic-6cc9f57487-vklxq | Started | Started container ironic-api
openstack | kubelet | cinder-b7346-backup-0 | Created | Created container: probe
openstack | kubelet | cinder-b7346-backup-0 | Started | Started container probe
openstack | kubelet | cinder-b7346-scheduler-0 | Started | Started container cinder-scheduler
openstack | kubelet | cinder-b7346-scheduler-0 | Created | Created container: cinder-scheduler
openstack | kubelet | cinder-b7346-scheduler-0 | Started | Started container probe (x2)
openstack | kubelet | ironic-555fd64789-cgpft | Created | Created container: ironic-api (x2)
openstack | kubelet | ironic-555fd64789-cgpft | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine
openstack | kubelet | cinder-b7346-scheduler-0 | Created | Created container: probe
openstack | metallb-speaker | keystone-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x2)
openstack | kubelet | ironic-555fd64789-cgpft | Started | Started container ironic-api
openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6"
openstack | metallb-speaker | cinder-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x2)
openstack | kubelet | ironic-555fd64789-cgpft | BackOff | Back-off restarting failed container ironic-api in pod ironic-555fd64789-cgpft_openstack(700c3143-d1a3-47a3-92f5-02a0b1e428a4)
openstack | kubelet | openstackclient | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-ncm67" : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:master-0" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'master-0' and this object
openstack | kubelet | openstackclient | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:f446a1c7e6aed77f28fca3c632fb8d356e361e784dc15d5dc1e235886ab536bd"
openstack | multus | openstackclient | AddedInterface | Add eth0 [10.128.0.234/23] from ovn-kubernetes
openstack | replicaset-controller | ironic-555fd64789 | SuccessfulDelete | Deleted pod: ironic-555fd64789-cgpft
openstack | kubelet | ironic-555fd64789-cgpft | Killing | Stopping container ironic-api-log
openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled down replica set ironic-555fd64789 to 0 from 1 (x2)
openstack | kubelet | ironic-neutron-agent-856d98ff5d-2p7np | BackOff | Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-856d98ff5d-2p7np_openstack(40a5b237-764f-4367-85a5-4153a8f90a3e)
openstack | job-controller | ironic-inspector-db-sync | SuccessfulCreate | Created pod: ironic-inspector-db-sync-pd272 (x3)
openstack | metallb-speaker | ironic-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | replicaset-controller | swift-proxy-8695dc84b | SuccessfulCreate | Created pod: swift-proxy-8695dc84b-bccck
openstack | multus | ironic-inspector-db-sync-pd272 | AddedInterface | Add eth0 [10.128.0.235/23] from ovn-kubernetes
openstack | deployment-controller | swift-proxy | ScalingReplicaSet | Scaled up replica set swift-proxy-8695dc84b to 1
openstack | replicaset-controller | dnsmasq-dns-564d4966c5 | SuccessfulDelete | Deleted pod: dnsmasq-dns-564d4966c5-82kwv
openstack | kubelet | dnsmasq-dns-564d4966c5-82kwv | Killing | Stopping container dnsmasq-dns (x2)
openstack | statefulset-controller | glance-bdafd-default-external-api | SuccessfulDelete | delete Pod glance-bdafd-default-external-api-0 in StatefulSet glance-bdafd-default-external-api successful
openstack | kubelet | glance-bdafd-default-external-api-0 | Killing | Stopping container glance-httpd
openstack | kubelet | glance-bdafd-default-external-api-0 | Killing | Stopping container glance-log
openstack | kubelet | ironic-inspector-db-sync-pd272 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38"
openstack | kubelet | swift-proxy-8695dc84b-bccck | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123" already present on machine (x2)
openstack | statefulset-controller | glance-bdafd-default-internal-api | SuccessfulDelete | delete Pod glance-bdafd-default-internal-api-0 in StatefulSet glance-bdafd-default-internal-api successful
openstack | kubelet | glance-bdafd-default-internal-api-0 | Killing | Stopping container glance-httpd
openstack | kubelet | glance-bdafd-default-internal-api-0 | Killing | Stopping container glance-log
openstack | multus | swift-proxy-8695dc84b-bccck | AddedInterface | Add eth0 [10.128.0.236/23] from ovn-kubernetes
openstack | kubelet | swift-proxy-8695dc84b-bccck | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123" already present on machine
openstack | kubelet | swift-proxy-8695dc84b-bccck | Started | Started container proxy-httpd
openstack | kubelet | swift-proxy-8695dc84b-bccck | Created | Created container: proxy-httpd
openstack | kubelet | swift-proxy-8695dc84b-bccck | Created | Created container: proxy-server
openstack | kubelet | swift-proxy-8695dc84b-bccck | Started | Started container proxy-server
openstack | kubelet | glance-bdafd-default-external-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.208:9292/healthcheck": read tcp 10.128.0.2:41682->10.128.0.208:9292: read: connection reset by peer
openstack | kubelet | glance-bdafd-default-external-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.208:9292/healthcheck": read tcp 10.128.0.2:41684->10.128.0.208:9292: read: connection reset by peer
openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled down replica set neutron-d477bdc58 to 0 from 1
openstack | kubelet | neutron-d477bdc58-p8d8s | Killing | Stopping container neutron-httpd
openstack | kubelet | ironic-inspector-db-sync-pd272 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" in 3.428s (3.428s including waiting). Image size: 539826777 bytes.
openstack | kubelet | ironic-inspector-db-sync-pd272 | Created | Created container: ironic-inspector-db-sync
openstack | replicaset-controller | neutron-d477bdc58 | SuccessfulDelete | Deleted pod: neutron-d477bdc58-p8d8s
openstack | kubelet | ironic-inspector-db-sync-pd272 | Started | Started container ironic-inspector-db-sync
openstack | kubelet | neutron-d477bdc58-p8d8s | Killing | Stopping container neutron-api
openstack | metallb-speaker | swift-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x2)
openstack | kubelet | ironic-neutron-agent-856d98ff5d-2p7np | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:a3f5b519c7fc33e9f66fe553a7bc5cce51c3ff01223190cfa93bb75149a1dfcc" already present on machine
openstack | kubelet | placement-fb464bf7d-gv8b6 | Killing | Stopping container placement-log (x3)
openstack | statefulset-controller | glance-bdafd-default-external-api | SuccessfulCreate | create Pod glance-bdafd-default-external-api-0 in StatefulSet glance-bdafd-default-external-api successful
openstack | deployment-controller | placement | ScalingReplicaSet | Scaled down replica set placement-fb464bf7d to 0 from 1
openstack | replicaset-controller | placement-fb464bf7d | SuccessfulDelete | Deleted pod: placement-fb464bf7d-gv8b6
openstack | kubelet | placement-fb464bf7d-gv8b6 | Killing | Stopping container placement-api
openstack | job-controller | ironic-inspector-db-sync | Completed | Job completed
openstack | job-controller | nova-cell0-db-create | SuccessfulCreate | Created pod: nova-cell0-db-create-kzhmb
openstack | job-controller | nova-api-db-create | SuccessfulCreate | Created pod: nova-api-db-create-qrtq2
openstack | job-controller | nova-cell0-8a9d-account-create-update | SuccessfulCreate | Created pod: nova-cell0-8a9d-account-create-update-hxq4n
openstack | job-controller | nova-cell1-db-create | SuccessfulCreate | Created pod: nova-cell1-db-create-4kz4t
openstack | job-controller | nova-api-e077-account-create-update | SuccessfulCreate | Created pod: nova-api-e077-account-create-update-fnxnr
openstack | job-controller | nova-cell1-c618-account-create-update | SuccessfulCreate | Created pod: nova-cell1-c618-account-create-update-mmq8h (x2)
openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | metallb-controller | ironic-inspector-internal | IPAllocated | Assigned IP ["192.168.122.80"] (x2)
openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | replicaset-controller | dnsmasq-dns-55b78786dc | SuccessfulCreate | Created pod: dnsmasq-dns-55b78786dc-sn557
openstack | cert-manager-certificaterequests-approver | ironic-inspector-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | ironic-inspector-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | ironic-inspector-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

ironic-inspector-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

ironic-inspector-internal-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-hl7m5"

openstack

cert-manager-certificates-request-manager

ironic-inspector-internal-svc

Requested

Created new CertificateRequest resource "ironic-inspector-internal-svc-1"

openstack

cert-manager-certificates-issuing

ironic-inspector-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-svc

Requested

Created new CertificateRequest resource "ironic-inspector-public-svc-1"

openstack

cert-manager-certificates-trigger

ironic-inspector-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-svc-p7xr6"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-route

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-route-h2mqq"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-route

Requested

Created new CertificateRequest resource "ironic-inspector-public-route-1"

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

openstackclient

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:f446a1c7e6aed77f28fca3c632fb8d356e361e784dc15d5dc1e235886ab536bd" in 24.113s (24.113s including waiting). Image size: 594534254 bytes.

openstack

cert-manager-certificates-issuing

ironic-inspector-public-route

Issuing

The certificate has been successfully issued

openstack

kubelet

openstackclient

Created

Created container: openstackclient

openstack

kubelet

openstackclient

Started

Started container openstackclient

openstack

multus

nova-api-db-create-qrtq2

AddedInterface

Add eth0 [10.128.0.238/23] from ovn-kubernetes
(x5)

openstack

metallb-speaker

placement-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

multus

nova-cell1-db-create-4kz4t

AddedInterface

Add eth0 [10.128.0.241/23] from ovn-kubernetes
(x3)

openstack

kubelet

ironic-neutron-agent-856d98ff5d-2p7np

Started

Started container ironic-neutron-agent
(x4)

openstack

metallb-speaker

neutron-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"
(x3)

openstack

kubelet

ironic-neutron-agent-856d98ff5d-2p7np

Created

Created container: ironic-neutron-agent

openstack

multus

nova-cell0-8a9d-account-create-update-hxq4n

AddedInterface

Add eth0 [10.128.0.242/23] from ovn-kubernetes

openstack

multus

nova-cell1-c618-account-create-update-mmq8h

AddedInterface

Add eth0 [10.128.0.243/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-db-create-kzhmb

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

multus

glance-bdafd-default-external-api-0

AddedInterface

Add eth0 [10.128.0.237/23] from ovn-kubernetes

openstack

kubelet

nova-api-db-create-qrtq2

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

multus

dnsmasq-dns-55b78786dc-sn557

AddedInterface

Add eth0 [10.128.0.244/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-55b78786dc-sn557

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

multus

nova-api-e077-account-create-update-fnxnr

AddedInterface

Add eth0 [10.128.0.240/23] from ovn-kubernetes

openstack

kubelet

nova-api-e077-account-create-update-fnxnr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

nova-cell1-db-create-4kz4t

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

nova-cell0-8a9d-account-create-update-hxq4n

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.245/23] from ovn-kubernetes

openstack

kubelet

ironic-conductor-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6" in 27.951s (27.951s including waiting). Image size: 786789676 bytes.

openstack

kubelet

nova-cell1-c618-account-create-update-mmq8h

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

multus

nova-cell0-db-create-kzhmb

AddedInterface

Add eth0 [10.128.0.239/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-8a9d-account-create-update-hxq4n

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-api-e077-account-create-update-fnxnr

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-cell0-db-create-kzhmb

Started

Started container mariadb-database-create

openstack

kubelet

nova-cell1-c618-account-create-update-mmq8h

Created

Created container: mariadb-account-create-update

openstack

kubelet

nova-cell1-c618-account-create-update-mmq8h

Started

Started container mariadb-account-create-update

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

kubelet

nova-cell0-8a9d-account-create-update-hxq4n

Created

Created container: mariadb-account-create-update

openstack

statefulset-controller

ironic-inspector

SuccessfulDelete

delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

kubelet

dnsmasq-dns-55b78786dc-sn557

Started

Started container init

openstack

kubelet

dnsmasq-dns-55b78786dc-sn557

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

dnsmasq-dns-55b78786dc-sn557

Created

Created container: init

openstack

kubelet

nova-cell0-db-create-kzhmb

Created

Created container: mariadb-database-create

openstack

kubelet

nova-api-e077-account-create-update-fnxnr

Created

Created container: mariadb-account-create-update

openstack

kubelet

nova-cell1-db-create-4kz4t

Created

Created container: mariadb-database-create

openstack

kubelet

nova-cell1-db-create-4kz4t

Started

Started container mariadb-database-create

openstack

kubelet

nova-api-db-create-qrtq2

Started

Started container mariadb-database-create

openstack

kubelet

nova-api-db-create-qrtq2

Created

Created container: mariadb-database-create

openstack

kubelet

ironic-conductor-0

Started

Started container ironic-python-agent-init

openstack

kubelet

ironic-conductor-0

Created

Created container: ironic-python-agent-init

openstack

multus

glance-bdafd-default-external-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

kubelet

glance-bdafd-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine
(x3)

openstack

statefulset-controller

glance-bdafd-default-internal-api

SuccessfulCreate

create Pod glance-bdafd-default-internal-api-0 in StatefulSet glance-bdafd-default-internal-api successful

openstack

kubelet

glance-bdafd-default-external-api-0

Started

Started container glance-log

openstack

kubelet

glance-bdafd-default-external-api-0

Created

Created container: glance-log

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-python-agent-init

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-python-agent-init

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6" already present on machine

openstack

kubelet

dnsmasq-dns-55b78786dc-sn557

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-55b78786dc-sn557

Created

Created container: dnsmasq-dns

openstack

kubelet

glance-bdafd-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-bdafd-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

multus

glance-bdafd-default-internal-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

kubelet

glance-bdafd-default-external-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-bdafd-default-external-api-0

Started

Started container glance-httpd

openstack

multus

glance-bdafd-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.246/23] from ovn-kubernetes

openstack

kubelet

glance-bdafd-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-bdafd-default-internal-api-0

Started

Started container glance-log

openstack

kubelet

glance-bdafd-default-internal-api-0

Created

Created container: glance-log
(x2)

openstack

statefulset-controller

ironic-inspector

SuccessfulCreate

create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-python-agent-init

openstack

job-controller

nova-api-db-create

Completed

Job completed

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6" already present on machine

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-python-agent-init

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

kubelet

glance-bdafd-default-internal-api-0

Started

Started container glance-httpd

openstack

kubelet

glance-bdafd-default-internal-api-0

Created

Created container: glance-httpd

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.247/23] from ovn-kubernetes

openstack

job-controller

nova-cell1-db-create

Completed

Job completed

openstack

job-controller

nova-api-e077-account-create-update

Completed

Job completed

openstack

job-controller

nova-cell0-8a9d-account-create-update

Completed

Job completed

openstack

job-controller

nova-cell1-c618-account-create-update

Completed

Job completed

openstack

kubelet

ironic-inspector-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32"

openstack

job-controller

nova-cell0-db-create

Completed

Job completed

openstack

kubelet

ironic-conductor-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32"

openstack

job-controller

nova-cell0-conductor-db-sync

SuccessfulCreate

Created pod: nova-cell0-conductor-db-sync-ph4c9

openstack

kubelet

ironic-inspector-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" in 3.989s (3.989s including waiting). Image size: 657316612 bytes.

openstack

kubelet

ironic-conductor-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" in 2.973s (2.973s including waiting). Image size: 657316612 bytes.

openstack

replicaset-controller

dnsmasq-dns-7d9d8bd467

SuccessfulDelete

Deleted pod: dnsmasq-dns-7d9d8bd467-64rvv

openstack

kubelet

dnsmasq-dns-7d9d8bd467-64rvv

Killing

Stopping container dnsmasq-dns

openstack

multus

nova-cell0-conductor-db-sync-ph4c9

AddedInterface

Add eth0 [10.128.0.248/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-conductor-db-sync-ph4c9

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea"

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-pxe-init

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-pxe-init

openstack

kubelet

ironic-conductor-0

Started

Started container pxe-init

openstack

kubelet

ironic-conductor-0

Created

Created container: pxe-init

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-inspector

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" already present on machine

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-inspector-httpd

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-inspector

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-inspector-httpd

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-httpboot

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-httpboot

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine

openstack

kubelet

ironic-inspector-0

Created

Created container: ramdisk-logs

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine

openstack

kubelet

ironic-inspector-0

Started

Started container ramdisk-logs

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-dnsmasq

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-dnsmasq
(x3)

openstack

metallb-speaker

glance-default-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

nova-cell0-conductor-db-sync-ph4c9

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" in 10.609s (10.609s including waiting). Image size: 668212205 bytes.

openstack

kubelet

nova-cell0-conductor-db-sync-ph4c9

Created

Created container: nova-cell0-conductor-db-sync

openstack

kubelet

nova-cell0-conductor-db-sync-ph4c9

Started

Started container nova-cell0-conductor-db-sync

openstack

metallb-speaker

ironic-inspector-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

statefulset-controller

nova-cell0-conductor

SuccessfulCreate

create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful

openstack

job-controller

nova-cell0-conductor-db-sync

Completed

Job completed

openstack

kubelet

nova-cell0-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine

openstack

kubelet

nova-cell0-conductor-0

Created

Created container: nova-cell0-conductor-conductor

openstack

kubelet

nova-cell0-conductor-0

Started

Started container nova-cell0-conductor-conductor

openstack

multus

nova-cell0-conductor-0

AddedInterface

Add eth0 [10.128.0.249/23] from ovn-kubernetes

openstack

job-controller

nova-cell0-cell-mapping

SuccessfulCreate

Created pod: nova-cell0-cell-mapping-fck78

openstack

statefulset-controller

nova-cell1-compute-ironic-compute

SuccessfulCreate

create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful

openstack

cert-manager-certificates-request-manager

nova-metadata-internal-svc

Requested

Created new CertificateRequest resource "nova-metadata-internal-svc-1"
(x2)

openstack

metallb-controller

nova-metadata-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

cert-manager-certificaterequests-issuer-ca

nova-metadata-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

nova-metadata-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

nova-metadata-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

nova-metadata-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

nova-metadata-internal-svc

Generated

Stored new private key in temporary Secret resource "nova-metadata-internal-svc-px7qj"

openstack

replicaset-controller

dnsmasq-dns-6fcf8f9d6f

SuccessfulCreate

Created pod: dnsmasq-dns-6fcf8f9d6f-578q8

openstack

cert-manager-certificates-issuing

nova-metadata-internal-svc

Issuing

The certificate has been successfully issued

openstack

metallb-controller

nova-metadata-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

nova-metadata-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

kubelet

nova-cell0-cell-mapping-fck78

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
(x2)

openstack

metallb-controller

nova-metadata-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

multus

nova-cell1-compute-ironic-compute-0

AddedInterface

Add eth0 [10.128.0.251/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-compute-ironic-compute-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d"

openstack

cert-manager-certificaterequests-issuer-venafi

nova-metadata-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

nova-cell0-cell-mapping-fck78

AddedInterface

Add eth0 [10.128.0.250/23] from ovn-kubernetes

openstack

job-controller

nova-cell1-conductor-db-sync

SuccessfulCreate

Created pod: nova-cell1-conductor-db-sync-lc7xf

openstack

cert-manager-certificaterequests-issuer-selfsigned

nova-metadata-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

nova-metadata-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

nova-metadata-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

nova-novncproxy-cell1-public-svc

Generated

Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-pbhdf"

openstack

kubelet

nova-cell0-cell-mapping-fck78

Created

Created container: nova-manage

openstack

kubelet

nova-cell1-novncproxy-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:bd2b75f4a9e51369f7c6352ddcf6520afb1f3ea8795a683466b6802da3c26f77"

openstack

cert-manager-certificates-trigger

nova-novncproxy-cell1-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

multus

nova-cell1-novncproxy-0

AddedInterface

Add eth0 [10.128.1.0/23] from ovn-kubernetes

openstack

kubelet

nova-scheduler-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617"

openstack

multus

nova-scheduler-0

AddedInterface

Add eth0 [10.128.0.254/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-vault

nova-novncproxy-cell1-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

dnsmasq-dns-6fcf8f9d6f-578q8

AddedInterface

Add eth0 [10.128.0.255/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-6fcf8f9d6f-578q8

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificates-issuing

nova-novncproxy-cell1-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

nova-novncproxy-cell1-public-svc

Requested

Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-acme

nova-novncproxy-cell1-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

nova-novncproxy-cell1-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

nova-novncproxy-cell1-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

multus

nova-metadata-0

AddedInterface

Add eth0 [10.128.0.253/23] from ovn-kubernetes

openstack

kubelet

nova-metadata-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657"

openstack

multus

nova-api-0

AddedInterface

Add eth0 [10.128.0.252/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

nova-novncproxy-cell1-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

nova-api-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657"

openstack

cert-manager-certificaterequests-issuer-venafi

nova-novncproxy-cell1-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

nova-novncproxy-cell1-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

nova-novncproxy-cell1-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

nova-cell0-cell-mapping-fck78

Started

Started container nova-manage

openstack

cert-manager-certificaterequests-issuer-acme

nova-novncproxy-cell1-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-6fcf8f9d6f-578q8

Started

Started container dnsmasq-dns

openstack

cert-manager-certificaterequests-approver

nova-novncproxy-cell1-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

nova-novncproxy-cell1-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-ca

nova-novncproxy-cell1-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

nova-novncproxy-cell1-public-route

Generated

Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-n77lw"

openstack

cert-manager-certificates-request-manager

nova-novncproxy-cell1-public-route

Requested

Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1"

openstack

cert-manager-certificates-issuing

nova-novncproxy-cell1-public-route

Issuing

The certificate has been successfully issued

openstack

kubelet

nova-cell1-conductor-db-sync-lc7xf

Started

Started container nova-cell1-conductor-db-sync

Namespace | Component | RelatedObject | Reason | Message
openstack | kubelet | nova-cell1-conductor-db-sync-lc7xf | Created | Created container: nova-cell1-conductor-db-sync
openstack | kubelet | nova-cell1-conductor-db-sync-lc7xf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | nova-cell1-conductor-db-sync-lc7xf | AddedInterface | Add eth0 [10.128.1.1/23] from ovn-kubernetes
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-vencrypt | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-6fcf8f9d6f-578q8 | Created | Created container: init
openstack | kubelet | dnsmasq-dns-6fcf8f9d6f-578q8 | Started | Started container init
openstack | kubelet | dnsmasq-dns-6fcf8f9d6f-578q8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-6fcf8f9d6f-578q8 | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-vencrypt | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-vencrypt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-vencrypt | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-zzwvb"
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-vencrypt | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1"
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulDelete | delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" in 3.918s (3.918s including waiting). Image size: 685015783 bytes.
openstack | kubelet | nova-api-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" in 4.163s (4.163s including waiting). Image size: 685015783 bytes.
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617" in 3.929s (3.929s including waiting). Image size: 668216812 bytes.
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:bd2b75f4a9e51369f7c6352ddcf6520afb1f3ea8795a683466b6802da3c26f77" in 3.736s (3.736s including waiting). Image size: 670576628 bytes.
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | ironic-conductor-0 | Created | Created container: httpboot
openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-conductor
openstack | kubelet | ironic-conductor-0 | Started | Started container httpboot
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" already present on machine
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" already present on machine
openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-conductor
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.2/23] from ovn-kubernetes

openstack | kubelet | ironic-conductor-0 | Created | Created container: dnsmasq
openstack | kubelet | ironic-conductor-0 | Started | Started container dnsmasq
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | dnsmasq-dns-55b78786dc-sn557 | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-55b78786dc | SuccessfulDelete | Deleted pod: dnsmasq-dns-55b78786dc-sn557
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.0.252:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.0.252:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d" in 15.183s (15.183s including waiting). Image size: 1216089983 bytes.
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Created | Created container: nova-cell1-compute-ironic-compute-compute
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Started | Started container nova-cell1-compute-ironic-compute-compute (x2)
openstack | kubelet | dnsmasq-dns-55b78786dc-sn557 | Unhealthy | Readiness probe failed: dial tcp 10.128.0.244:5353: connect: connection refused
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
openstack | job-controller | nova-cell0-cell-mapping | Completed | Job completed
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed
openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful
openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor
openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.6/23] from ovn-kubernetes
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617" already present on machine
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.4:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.4:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.5:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.5:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x2)
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulCreate | create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:bd2b75f4a9e51369f7c6352ddcf6520afb1f3ea8795a683466b6802da3c26f77" already present on machine
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.7/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | replicaset-controller | dnsmasq-dns-555687858c | SuccessfulCreate | Created pod: dnsmasq-dns-555687858c-l6w59 (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

default | endpoint-controller | nova-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/nova-internal: endpoints "nova-internal" already exists
openstack | metallb-controller | nova-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | cert-manager-certificates-trigger | nova-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | multus | dnsmasq-dns-555687858c-l6w59 | AddedInterface | Add eth0 [10.128.1.8/23] from ovn-kubernetes
openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-6g58z"
openstack | kubelet | dnsmasq-dns-555687858c-l6w59 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-555687858c-l6w59 | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | nova-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-vault | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-42qjk"
openstack | cert-manager-certificates-request-manager | nova-public-svc | Requested | Created new CertificateRequest resource "nova-public-svc-1"
openstack | cert-manager-certificates-issuing | nova-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-555687858c-l6w59 | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-555687858c-l6w59 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-555687858c-l6w59 | Started | Started container init
openstack | kubelet | dnsmasq-dns-555687858c-l6w59 | Created | Created container: init
openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | nova-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-issuing | nova-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | nova-public-route | Requested | Created new CertificateRequest resource "nova-public-route-1"
openstack | cert-manager-certificates-key-manager | nova-public-route | Generated | Stored new private key in temporary Secret resource "nova-public-route-z586v"
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | multus | nova-cell1-host-discover-4lbdf | AddedInterface | Add eth0 [10.128.1.10/23] from ovn-kubernetes
openstack | multus | nova-cell1-cell-mapping-l5vzg | AddedInterface | Add eth0 [10.128.1.9/23] from ovn-kubernetes
openstack | job-controller | nova-cell1-host-discover | SuccessfulCreate | Created pod: nova-cell1-host-discover-4lbdf
openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-l5vzg
openstack | kubelet | nova-cell1-host-discover-4lbdf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | kubelet | nova-cell1-host-discover-4lbdf | Started | Started container nova-manage
openstack | kubelet | nova-cell1-host-discover-4lbdf | Created | Created container: nova-manage
openstack | kubelet | nova-cell1-cell-mapping-l5vzg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | kubelet | nova-cell1-cell-mapping-l5vzg | Started | Started container nova-manage
openstack | kubelet | nova-cell1-cell-mapping-l5vzg | Created | Created container: nova-manage
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.11/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log (x24)

openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | (combined from similar events): Scaled down replica set dnsmasq-dns-6fcf8f9d6f to 0 from 1
openstack | replicaset-controller | dnsmasq-dns-6fcf8f9d6f | SuccessfulDelete | Deleted pod: dnsmasq-dns-6fcf8f9d6f-578q8
openstack | kubelet | dnsmasq-dns-6fcf8f9d6f-578q8 | Killing | Stopping container dnsmasq-dns
openstack | job-controller | nova-cell1-host-discover | Completed | Job completed (x3)
openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata (x3)
openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log (x2)
openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 (x4)
openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.12/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.4:8775/": read tcp 10.128.0.2:53838->10.128.1.4:8775: read: connection reset by peer
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.4:8775/": read tcp 10.128.0.2:53832->10.128.1.4:8775: read: connection reset by peer
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine (x4)
openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.13/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata (x3)
openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.14/23] from ovn-kubernetes
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617" already present on machine
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.12:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.12:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.13:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.13:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (x3)
openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x3)
openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x11)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service (x11)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
sushy-emulator | kubelet | sushy-emulator-78f6d7d749-rjgth | Killing | Stopping container sushy-emulator
sushy-emulator | replicaset-controller | sushy-emulator-78f6d7d749 | SuccessfulDelete | Deleted pod: sushy-emulator-78f6d7d749-rjgth
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled down replica set sushy-emulator-78f6d7d749 to 0 from 1

openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Unhealthy | Readiness probe failed: Get "http://10.128.0.157:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
sushy-emulator | replicaset-controller | sushy-emulator-84965d5d88 | SuccessfulCreate | Created pod: sushy-emulator-84965d5d88-5549q
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-96xg2 | Unhealthy | Readiness probe failed: Get "http://10.128.0.157:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-84965d5d88 to 1
sushy-emulator | multus | sushy-emulator-84965d5d88-5549q | AddedInterface | Add ironic [172.20.1.71/24] from sushy-emulator/ironic
sushy-emulator | kubelet | sushy-emulator-84965d5d88-5549q | Pulled | Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" already present on machine
sushy-emulator | multus | sushy-emulator-84965d5d88-5549q | AddedInterface | Add eth0 [10.128.1.15/23] from ovn-kubernetes
sushy-emulator | kubelet | sushy-emulator-84965d5d88-5549q | Created | Created container: sushy-emulator
sushy-emulator | kubelet | sushy-emulator-84965d5d88-5549q | Started | Started container sushy-emulator
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29531880
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531880 | SuccessfulCreate | Created pod: collect-profiles-29531880-xpxmc
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531880-xpxmc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-operator-lifecycle-manager | multus | collect-profiles-29531880-xpxmc | AddedInterface | Add eth0 [10.128.1.16/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531880-xpxmc | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531880-xpxmc | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29531880, condition: Complete
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29531835
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531880 | Completed | Job completed
openstack | job-controller | keystone-cron-29531881 | SuccessfulCreate | Created pod: keystone-cron-29531881-w8jl5
openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29531881
openstack | kubelet | keystone-cron-29531881-w8jl5 | Created | Created container: keystone-cron
openstack | kubelet | keystone-cron-29531881-w8jl5 | Started | Started container keystone-cron
openstack | multus | keystone-cron-29531881-w8jl5 | AddedInterface | Add eth0 [10.128.1.17/23] from ovn-kubernetes
openstack | kubelet | keystone-cron-29531881-w8jl5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine
openstack | job-controller | keystone-cron-29531881 | Completed | Job completed
openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29531881, condition: Complete
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29531895
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531895 | SuccessfulCreate | Created pod: collect-profiles-29531895-57vmb
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531895-57vmb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-operator-lifecycle-manager | multus | collect-profiles-29531895-57vmb | AddedInterface | Add eth0 [10.128.1.18/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531895-57vmb | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531895-57vmb | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531895 | Completed | Job completed (x2)
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29531895, condition: Complete
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29531850
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openstack

kubelet

swift-proxy-8695dc84b-bccck

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 502

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531910

SuccessfulCreate

Created pod: collect-profiles-29531910-4pps5

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29531910

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531910-4pps5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531910-4pps5

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531910-4pps5

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

multus

collect-profiles-29531910-4pps5

AddedInterface

Add eth0 [10.128.1.19/23] from ovn-kubernetes
(x2)

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29531910, condition: Complete

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulDelete

Deleted job collect-profiles-29531865

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531910

Completed

Job completed

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-must-gather-khpqm namespace