Namespace | RelatedObject | Reason | Message
openshift-marketplace | community-operators-kkwwl | Scheduled | Successfully assigned openshift-marketplace/community-operators-kkwwl to master-0
openstack | cinder-6ac23-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-6ac23-volume-lvm-iscsi-0 to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-66946c8978-9t8v8 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8 to master-0
openstack | root-account-create-update-pxfms | Scheduled | Successfully assigned openstack/root-account-create-update-pxfms to master-0
openshift-monitoring | openshift-state-metrics-6dbff8cb4c-swtr6 | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6 to master-0
openshift-monitoring | node-exporter-2qn8m | Scheduled | Successfully assigned openshift-monitoring/node-exporter-2qn8m to master-0
openshift-monitoring | monitoring-plugin-5d9ddb8754-xtrdd | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd to master-0
openshift-monitoring | metrics-server-7b9cc5984b-smpdl | Scheduled | Successfully assigned openshift-monitoring/metrics-server-7b9cc5984b-smpdl to master-0
openshift-monitoring | metrics-server-67ddc7b799-zlnvf | Scheduled | Successfully assigned openshift-monitoring/metrics-server-67ddc7b799-zlnvf to master-0
openshift-monitoring | kube-state-metrics-59584d565f-f6f26 | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-59584d565f-f6f26 to master-0
openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openstack | ironic-db-create-8l585 | Scheduled | Successfully assigned openstack/ironic-db-create-8l585 to master-0
openstack | ironic-85b75c94bc-pp6mc | Scheduled | Successfully assigned openstack/ironic-85b75c94bc-pp6mc to master-0
openstack | ironic-db-sync-jzr8b | Scheduled | Successfully assigned openstack/ironic-db-sync-jzr8b to master-0
openstack | ironic-ecce-account-create-update-2pvjj | Scheduled | Successfully assigned openstack/ironic-ecce-account-create-update-2pvjj to master-0
cert-manager | cert-manager-cainjector-5545bd876-hhm6l | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-hhm6l to master-0
openstack | ironic-inspector-0 | Scheduled | Successfully assigned openstack/ironic-inspector-0 to master-0
openstack | ironic-inspector-0 | Scheduled | Successfully assigned openstack/ironic-inspector-0 to master-0
openstack | ironic-inspector-2bdc-account-create-update-5cgdd | Scheduled | Successfully assigned openstack/ironic-inspector-2bdc-account-create-update-5cgdd to master-0
openstack | ironic-inspector-db-create-8kz9s | Scheduled | Successfully assigned openstack/ironic-inspector-db-create-8kz9s to master-0
openstack | ironic-inspector-db-sync-8hw9n | Scheduled | Successfully assigned openstack/ironic-inspector-db-sync-8hw9n to master-0
openstack | ironic-neutron-agent-7d8f6784f6-dqjdm | Scheduled | Successfully assigned openstack/ironic-neutron-agent-7d8f6784f6-dqjdm to master-0
openstack | keystone-5d23-account-create-update-q2xlr | Scheduled | Successfully assigned openstack/keystone-5d23-account-create-update-q2xlr to master-0
cert-manager | cert-manager-webhook-6888856db4-j4m97 | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-j4m97 to master-0
openstack | keystone-8f98fb65f-btxw6 | Scheduled | Successfully assigned openstack/keystone-8f98fb65f-btxw6 to master-0
openstack | keystone-bootstrap-bw6l8 | Scheduled | Successfully assigned openstack/keystone-bootstrap-bw6l8 to master-0
openstack | keystone-bootstrap-wdcb6 | Scheduled | Successfully assigned openstack/keystone-bootstrap-wdcb6 to master-0
openstack | keystone-cron-29531701-28wv4 | Scheduled | Successfully assigned openstack/keystone-cron-29531701-28wv4 to master-0
openstack | keystone-db-create-d7rmf | Scheduled | Successfully assigned openstack/keystone-db-create-d7rmf to master-0
openstack | keystone-db-sync-4z2pz | Scheduled | Successfully assigned openstack/keystone-db-sync-4z2pz to master-0
openstack | ironic-75c678c459-9mmbb | Scheduled | Successfully assigned openstack/ironic-75c678c459-9mmbb to master-0
openstack | memcached-0 | Scheduled | Successfully assigned openstack/memcached-0 to master-0
openstack | neutron-55455d5d8d-zzwzz | Scheduled | Successfully assigned openstack/neutron-55455d5d8d-zzwzz to master-0
openstack | neutron-6b46dbc6bf-ngrn9 | Scheduled | Successfully assigned openstack/neutron-6b46dbc6bf-ngrn9 to master-0
openshift-insights | insights-operator-59b498fcfb-dbkwd | Scheduled | Successfully assigned openshift-insights/insights-operator-59b498fcfb-dbkwd to master-0
openshift-machine-api | machine-api-operator-5c7cf458b4-dsjgm | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm to master-0
openstack | neutron-7051-account-create-update-2j7gx | Scheduled | Successfully assigned openstack/neutron-7051-account-create-update-2j7gx to master-0
openstack | neutron-db-create-x5qn7 | Scheduled | Successfully assigned openstack/neutron-db-create-x5qn7 to master-0
openstack | neutron-db-sync-k6pnr | Scheduled | Successfully assigned openstack/neutron-db-sync-k6pnr to master-0
openstack | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0
openshift-machine-api | cluster-autoscaler-operator-86b8dc6d6-mtrdk | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk to master-0
openstack | ovn-controller-hjmv9 | Scheduled | Successfully assigned openstack/ovn-controller-hjmv9 to master-0
openshift-ingress | router-default-7b65dc9fcb-22sgl | Scheduled | Successfully assigned openshift-ingress/router-default-7b65dc9fcb-22sgl to master-0
openshift-ingress | router-default-7b65dc9fcb-22sgl | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
metallb-system | speaker-lbfkl | Scheduled | Successfully assigned metallb-system/speaker-lbfkl to master-0
metallb-system | metallb-operator-webhook-server-559d754c8d-8sgn7 | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7 to master-0
openshift-image-registry | node-ca-xrqvm | Scheduled | Successfully assigned openshift-image-registry/node-ca-xrqvm to master-0
openstack | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0
openstack | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0
openstack | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0
openstack | nova-api-05a5-account-create-update-bt8vb | Scheduled | Successfully assigned openstack/nova-api-05a5-account-create-update-bt8vb to master-0
openstack | nova-api-db-create-pf58r | Scheduled | Successfully assigned openstack/nova-api-db-create-pf58r to master-0
openstack | nova-cell0-7331-account-create-update-4cdxr | Scheduled | Successfully assigned openstack/nova-cell0-7331-account-create-update-4cdxr to master-0
openstack | nova-cell0-cell-mapping-mz9lj | Scheduled | Successfully assigned openstack/nova-cell0-cell-mapping-mz9lj to master-0
openstack | glance-e923-account-create-update-dswn2 | Scheduled | Successfully assigned openstack/glance-e923-account-create-update-dswn2 to master-0
openstack | nova-cell0-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell0-conductor-0 to master-0
openshift-route-controller-manager | route-controller-manager-6cf66f6dd4-lbnq4 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4 to master-0
metallb-system | metallb-operator-controller-manager-7577845998-zvq74 | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-7577845998-zvq74 to master-0
metallb-system | frr-k8s-webhook-server-78b44bf5bb-lthbs | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs to master-0
metallb-system | frr-k8s-gll2f | Scheduled | Successfully assigned metallb-system/frr-k8s-gll2f to master-0
metallb-system | controller-69bbfbf88f-s2t6d | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-s2t6d to master-0
cert-manager | cert-manager-webhook-6888856db4-j4m97 | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-j4m97 to master-0
cert-manager | cert-manager-cainjector-5545bd876-hhm6l | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-hhm6l to master-0
openstack | nova-cell0-conductor-db-sync-vdhjz | Scheduled | Successfully assigned openstack/nova-cell0-conductor-db-sync-vdhjz to master-0
openstack | nova-cell0-db-create-d8zwm | Scheduled | Successfully assigned openstack/nova-cell0-db-create-d8zwm to master-0
openstack | nova-cell1-cell-mapping-rxn8v | Scheduled | Successfully assigned openstack/nova-cell1-cell-mapping-rxn8v to master-0
openstack | nova-cell1-compute-ironic-compute-0 | Scheduled | Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0
openstack | nova-cell1-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-0 to master-0
openstack | nova-cell1-conductor-db-sync-7jt69 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-db-sync-7jt69 to master-0
openstack | nova-cell1-d8b9-account-create-update-kq9f4 | Scheduled | Successfully assigned openstack/nova-cell1-d8b9-account-create-update-kq9f4 to master-0
openstack | nova-cell1-db-create-xrhk2 | Scheduled | Successfully assigned openstack/nova-cell1-db-create-xrhk2 to master-0
openstack | nova-cell1-host-discover-stqn6 | Scheduled | Successfully assigned openstack/nova-cell1-host-discover-stqn6 to master-0
sushy-emulator | sushy-emulator-84965d5d88-6n2dg | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-84965d5d88-6n2dg to master-0
openstack | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0
cert-manager | cert-manager-545d4d4674-54xdp | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-54xdp to master-0
openshift-cluster-machine-approver | machine-approver-7dd9c7d7b9-sjqsx | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx to master-0
openshift-console | console-79b5f69b87-9qbb4 | Scheduled | Successfully assigned openshift-console/console-79b5f69b87-9qbb4 to master-0
openshift-console | console-7db5f64756-h92rx | Scheduled | Successfully assigned openshift-console/console-7db5f64756-h92rx to master-0
openshift-nmstate | nmstate-webhook-866bcb46dc-rft7d | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d to master-0
openshift-nmstate | nmstate-operator-694c9596b7-xp57m | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-xp57m to master-0
openshift-nmstate | nmstate-metrics-58c85c668d-zx9wt | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt to master-0
openshift-nmstate | nmstate-handler-bpzvz | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-bpzvz to master-0
openshift-console | downloads-955b69498-x847l | Scheduled | Successfully assigned openshift-console/downloads-955b69498-x847l to master-0
openshift-nmstate | nmstate-console-plugin-5c78fc5d65-nsdtc | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc to master-0
openshift-authentication | oauth-openshift-95876988f-c58ls | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-95876988f-c58ls to master-0
openshift-cluster-machine-approver | machine-approver-798b897698-rqrlc | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc to master-0
openshift-multus | multus-admission-controller-5f54bf67d4-ctssl | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f54bf67d4-ctssl to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-66946c8978-qbg2d | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d to master-0
openshift-operators | observability-operator-59bdc8b94-8lklf | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-8lklf to master-0
openshift-operators | perses-operator-5bf474d74f-rpqh9 | Scheduled | Successfully assigned openshift-operators/perses-operator-5bf474d74f-rpqh9 to master-0
openshift-cluster-storage-operator | cluster-storage-operator-f94476f49-c5wlk | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk to master-0
openshift-cloud-credential-operator | cloud-credential-operator-6968c58f46-fcr59 | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59 to master-0
openshift-cluster-samples-operator | cluster-samples-operator-65c5c48b9b-bkc9s | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s to master-0
openshift-route-controller-manager | route-controller-manager-676fddcd58-49xzd | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-route-controller-manager | route-controller-manager-676fddcd58-49xzd | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd to master-0
openshift-authentication | oauth-openshift-7f7cbb95f8-pfw2n | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-authentication | oauth-openshift-7f7cbb95f8-pfw2n | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-console | console-576f8c76bf-2xx46 | Scheduled | Successfully assigned openshift-console/console-576f8c76bf-2xx46 to master-0
openstack-operators | watcher-operator-controller-manager-bccc79885-4pjvq | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq to master-0
openshift-authentication | oauth-openshift-7f7cbb95f8-pfw2n | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n to master-0
openshift-operator-lifecycle-manager | collect-profiles-29531715-pdf5j | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j to master-0
openshift-operator-lifecycle-manager | collect-profiles-29531700-q4sct | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct to master-0
sushy-emulator | nova-console-poller-67cbf9ddc7-sbfjc | Scheduled | Successfully assigned sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc to master-0
openshift-operator-lifecycle-manager | collect-profiles-29531685-l2l87 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87 to master-0
openshift-operator-lifecycle-manager | collect-profiles-29531670-t652n | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n to master-0
openshift-console | console-5d9776c47f-6p4nc | Scheduled | Successfully assigned openshift-console/console-5d9776c47f-6p4nc to master-0
openstack | root-account-create-update-klrwt | Scheduled | Successfully assigned openstack/root-account-create-update-klrwt to master-0
openshift-operator-lifecycle-manager | collect-profiles-29531655-kw6fn | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn to master-0
openshift-operator-lifecycle-manager | collect-profiles-29531640-kptmw | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw to master-0
openstack-operators | watcher-operator-controller-manager-bccc79885-4pjvq | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq to master-0
openstack-operators | test-operator-controller-manager-5dc6794d5b-4djnj | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj to master-0
openshift-monitoring | prometheus-operator-754bc4d665-66lml | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-754bc4d665-66lml to master-0
openstack-operators | telemetry-operator-controller-manager-589c568786-kwb4z | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z to master-0
openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-qxzpw | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw to master-0
openstack-operators | swift-operator-controller-manager-68f46476f-pztlf | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-pztlf to master-0
openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-qxzpw | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw to master-0
openshift-cloud-controller-manager-operator | cluster-cloud-controller-manager-operator-67dd8d7969-8znkt | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt to master-0
openstack-operators | placement-operator-controller-manager-8497b45c89-nn47h | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h to master-0
openstack-operators | ovn-operator-controller-manager-5955d8c787-55b7d | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d to master-0
openstack-operators | openstack-operator-index-kxwsj | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-kxwsj to master-0
openstack-operators | openstack-operator-index-jjl54 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-jjl54 to master-0
openstack-operators | swift-operator-controller-manager-68f46476f-pztlf | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-pztlf to master-0
openstack-operators | openstack-operator-controller-manager-5dc486cffc-q59hq | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq to master-0
openstack-operators | openstack-operator-controller-init-55c649df44-lm7cq | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq to master-0
openstack-operators | openstack-baremetal-operator-controller-manager-579b7786b9tqsfz | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz to master-0
openstack-operators | octavia-operator-controller-manager-659dc6bbfc-74cdr | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr to master-0
openstack-operators | nova-operator-controller-manager-567668f5cf-nffrm | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm to master-0
openstack-operators | neutron-operator-controller-manager-6bd4687957-lwlws | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws to master-0
openstack-operators | mariadb-operator-controller-manager-6994f66f48-5xt4j | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j to master-0
openshift-machine-api | cluster-autoscaler-operator-86b8dc6d6-mtrdk | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk to master-0
openstack-operators | telemetry-operator-controller-manager-589c568786-kwb4z | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z to master-0
openshift-monitoring | prometheus-operator-admission-webhook-75d56db95f-9gkp2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | prometheus-operator-admission-webhook-75d56db95f-9gkp2 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2 to master-0
openshift-monitoring | telemeter-client-cc55f5fb6-hcn4g | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g to master-0
openshift-monitoring | thanos-querier-69565684c5-snfqm | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-69565684c5-snfqm to master-0
openshift-machine-api | machine-api-operator-5c7cf458b4-dsjgm | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm to master-0
openshift-machine-config-operator | machine-config-controller-54cb48566c-xzpl4 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4 to master-0
openshift-machine-config-operator | machine-config-daemon-hfpql | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-hfpql to master-0
openshift-multus | cni-sysctl-allowlist-ds-75qmm | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-75qmm to master-0
openshift-cloud-controller-manager-operator | cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw to master-0
sushy-emulator | sushy-emulator-78f6d7d749-q2bh9 | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-78f6d7d749-q2bh9 to master-0
openshift-machine-config-operator | machine-config-server-drf28 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-drf28 to master-0
openshift-marketplace | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf to master-0
openshift-marketplace | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx | Scheduled | Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx to master-0
openshift-marketplace | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx | Scheduled | Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx to master-0
openstack | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | openstack-cell1-galera-0 | Scheduled | Successfully assigned openstack/openstack-cell1-galera-0 to master-0
openstack | openstack-galera-0 | Scheduled | Successfully assigned openstack/openstack-galera-0 to master-0
openstack | openstackclient | Scheduled | Successfully assigned openstack/openstackclient to master-0
openshift-ingress-canary | ingress-canary-jjpsc | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-jjpsc to master-0
openstack | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0
openstack-operators | test-operator-controller-manager-5dc6794d5b-4djnj | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj to master-0
openstack | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0
openstack | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0
openstack | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0
openstack | placement-07e8-account-create-update-4xjm5 | Scheduled | Successfully assigned openstack/placement-07e8-account-create-update-4xjm5 to master-0
openshift-console-operator | console-operator-5df5ffc47c-gmjbd | Scheduled | Successfully assigned openshift-console-operator/console-operator-5df5ffc47c-gmjbd to master-0
openstack | placement-db-create-cjhw4 | Scheduled | Successfully assigned openstack/placement-db-create-cjhw4 to master-0
openstack | placement-db-sync-njvpx | Scheduled | Successfully assigned openstack/placement-db-sync-njvpx to master-0
openshift-network-diagnostics | network-check-source-58fb6744f5-l4wh6 | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6 to master-0
openstack-operators | manila-operator-controller-manager-67d996989d-psxsg | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-67d996989d-psxsg to master-0
openstack | placement-f597cf46d-llslv | Scheduled | Successfully assigned openstack/placement-f597cf46d-llslv to master-0
openshift-network-diagnostics | network-check-source-58fb6744f5-l4wh6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-storage | lvms-operator-7bbcf6487b-nkgxz | Scheduled | Successfully assigned openshift-storage/lvms-operator-7bbcf6487b-nkgxz to master-0
openstack-operators | keystone-operator-controller-manager-b4d948c87-ws6cb | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb to master-0
openstack | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0
openstack | rabbitmq-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-server-0 to master-0
openshift-operator-lifecycle-manager | collect-profiles-29531640-kptmw | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
openshift-route-controller-manager | route-controller-manager-6cf66f6dd4-lbnq4 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openstack-operators | ironic-operator-controller-manager-554564d7fc-hksp2 | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2 to master-0
openstack | swift-proxy-675fbd6d58-pdtfj | Scheduled | Successfully assigned openstack/swift-proxy-675fbd6d58-pdtfj to master-0
metallb-system | controller-69bbfbf88f-s2t6d | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-s2t6d to master-0
openstack | swift-ring-rebalance-gm5ph | Scheduled | Successfully assigned openstack/swift-ring-rebalance-gm5ph to master-0
openstack | ovn-controller-metrics-5w4cf | Scheduled | Successfully assigned openstack/ovn-controller-metrics-5w4cf to master-0
openstack-operators | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq | Scheduled | Successfully assigned openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq to master-0
openstack | glance-db-sync-dnhq7 | Scheduled | Successfully assigned openstack/glance-db-sync-dnhq7 to master-0
openstack | glance-db-create-jl696 | Scheduled | Successfully assigned openstack/glance-db-create-jl696 to master-0
openstack-operators | barbican-operator-controller-manager-868647ff47-2ldv2 | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2 to master-0
openstack | glance-8705a-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-8705a-default-internal-api-0 to master-0
openstack | glance-8705a-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-8705a-default-internal-api-0 to master-0
openstack-operators | cinder-operator-controller-manager-55d77d7b5c-b72xt | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt to master-0
openstack | glance-8705a-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-8705a-default-internal-api-0 to master-0
metallb-system | frr-k8s-gll2f | Scheduled | Successfully assigned metallb-system/frr-k8s-gll2f to master-0
openstack | glance-8705a-default-internal-api-0 | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods "glance-8705a-default-internal-api-0": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/openstack/glance-8705a-default-internal-api-0, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 91d1e2ac-734d-40c3-81e7-7691258ebb69, UID in object meta: d6814476-68e8-4541-91b5-e5a159982ff5
openstack-operators | designate-operator-controller-manager-6d8bf5c495-dzbvc | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc to master-0
openstack | glance-8705a-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-8705a-default-external-api-0 to master-0
openstack | glance-8705a-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-8705a-default-external-api-0 to master-0
openstack-operators | glance-operator-controller-manager-784b5bb6c5-zfd69 | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69 to master-0
openstack | glance-8705a-default-external-api-0 | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods "glance-8705a-default-external-api-0": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/openstack/glance-8705a-default-external-api-0, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 99408c78-e16d-4882-ad93-628ea6ecbb93, UID in object meta: 646d5895-0594-419d-bc57-3beb2730117e
openstack | glance-8705a-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-8705a-default-external-api-0 to master-0
openstack-operators | heat-operator-controller-manager-69f49c598c-5t6bt | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt to master-0
openstack-operators | horizon-operator-controller-manager-5b9b8895d5-49gvb | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb to master-0
openstack | dnsmasq-dns-bc7f9869-4kmll | Scheduled | Successfully assigned openstack/dnsmasq-dns-bc7f9869-4kmll to master-0
openstack-operators | infra-operator-controller-manager-5f879c76b6-2kk8t | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t to master-0
openstack | dnsmasq-dns-84556f859-6lpst | Scheduled | Successfully assigned openstack/dnsmasq-dns-84556f859-6lpst to master-0
openstack | dnsmasq-dns-7f74bd995c-jflbg | Scheduled | Successfully assigned openstack/dnsmasq-dns-7f74bd995c-jflbg to master-0
openstack | dnsmasq-dns-7d4c486879-cr468 | Scheduled | Successfully assigned openstack/dnsmasq-dns-7d4c486879-cr468 to master-0
openstack-operators | ironic-operator-controller-manager-554564d7fc-hksp2 | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2 to master-0
openstack | dnsmasq-dns-7cc6c67c77-h5cpc | Scheduled | Successfully assigned openstack/dnsmasq-dns-7cc6c67c77-h5cpc to master-0
openstack | dnsmasq-dns-7c45d57b9c-jf69p | Scheduled | Successfully assigned openstack/dnsmasq-dns-7c45d57b9c-jf69p to master-0
openstack | dnsmasq-dns-79745f7855-j9vwf | Scheduled | Successfully assigned openstack/dnsmasq-dns-79745f7855-j9vwf to master-0
openstack-operators | keystone-operator-controller-manager-b4d948c87-ws6cb | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb to master-0
openstack | dnsmasq-dns-7586c46c57-vgvpz | Scheduled | Successfully assigned openstack/dnsmasq-dns-7586c46c57-vgvpz to master-0
openstack | dnsmasq-dns-6f6fd9d5d9-zff6h | Scheduled | Successfully assigned openstack/dnsmasq-dns-6f6fd9d5d9-zff6h to master-0
openstack-operators | manila-operator-controller-manager-67d996989d-psxsg | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-67d996989d-psxsg to master-0
openstack | dnsmasq-dns-6bc5ccc685-kl2f6 | Scheduled | Successfully assigned openstack/dnsmasq-dns-6bc5ccc685-kl2f6 to master-0
openstack | dnsmasq-dns-6b45666449-v77b5 | Scheduled | Successfully assigned openstack/dnsmasq-dns-6b45666449-v77b5 to master-0
openstack-operators | mariadb-operator-controller-manager-6994f66f48-5xt4j | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j to master-0
metallb-system

frr-k8s-webhook-server-78b44bf5bb-lthbs

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs to master-0

openstack

dnsmasq-dns-6974cff98c-qbhgh

Scheduled

Successfully assigned openstack/dnsmasq-dns-6974cff98c-qbhgh to master-0

openstack

dnsmasq-dns-679f75d775-s56hh

Scheduled

Successfully assigned openstack/dnsmasq-dns-679f75d775-s56hh to master-0

openstack-operators

neutron-operator-controller-manager-6bd4687957-lwlws

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws to master-0

openstack

dnsmasq-dns-674dc645f-b7fhr

Scheduled

Successfully assigned openstack/dnsmasq-dns-674dc645f-b7fhr to master-0

openstack

dnsmasq-dns-64b4994945-klvx7

Scheduled

Successfully assigned openstack/dnsmasq-dns-64b4994945-klvx7 to master-0

openstack-operators

nova-operator-controller-manager-567668f5cf-nffrm

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm to master-0

openstack

dnsmasq-dns-5b55dc5f67-k2lcw

Scheduled

Successfully assigned openstack/dnsmasq-dns-5b55dc5f67-k2lcw to master-0

openstack

dnsmasq-dns-597f6b8457-gn4tl

Scheduled

Successfully assigned openstack/dnsmasq-dns-597f6b8457-gn4tl to master-0

openstack-operators

octavia-operator-controller-manager-659dc6bbfc-74cdr

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr to master-0

metallb-system

metallb-operator-controller-manager-7577845998-zvq74

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-7577845998-zvq74 to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz to master-0

openstack

cinder-ed6f-account-create-update-kn7d6

Scheduled

Successfully assigned openstack/cinder-ed6f-account-create-update-kn7d6 to master-0

openstack

cinder-db-create-vptkz

Scheduled

Successfully assigned openstack/cinder-db-create-vptkz to master-0

openstack-operators

openstack-operator-controller-init-55c649df44-lm7cq

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq to master-0

openstack

cinder-6ac23-volume-lvm-iscsi-0

Scheduled

Successfully assigned openstack/cinder-6ac23-volume-lvm-iscsi-0 to master-0

openstack-operators

openstack-operator-controller-manager-5dc486cffc-q59hq

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq to master-0

openstack-operators

openstack-operator-index-jjl54

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-jjl54 to master-0

metallb-system

metallb-operator-webhook-server-559d754c8d-8sgn7

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7 to master-0

openshift-console

console-6647cb86fc-wzjr8

Scheduled

Successfully assigned openshift-console/console-6647cb86fc-wzjr8 to master-0

openstack-operators

openstack-operator-index-kxwsj

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-kxwsj to master-0

openstack

cinder-6ac23-scheduler-0

Scheduled

Successfully assigned openstack/cinder-6ac23-scheduler-0 to master-0

openstack

cinder-6ac23-scheduler-0

Scheduled

Successfully assigned openstack/cinder-6ac23-scheduler-0 to master-0

openstack

cinder-6ac23-db-sync-mhchn

Scheduled

Successfully assigned openstack/cinder-6ac23-db-sync-mhchn to master-0

openstack

cinder-6ac23-backup-0

Scheduled

Successfully assigned openstack/cinder-6ac23-backup-0 to master-0

openstack-operators

ovn-operator-controller-manager-5955d8c787-55b7d

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d to master-0

openstack | cinder-6ac23-backup-0 | Scheduled | Successfully assigned openstack/cinder-6ac23-backup-0 to master-0
openstack | cinder-6ac23-api-0 | Scheduled | Successfully assigned openstack/cinder-6ac23-api-0 to master-0
openstack-operators | placement-operator-controller-manager-8497b45c89-nn47h | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h to master-0
openstack | cinder-6ac23-api-0 | Scheduled | Successfully assigned openstack/cinder-6ac23-api-0 to master-0
openshift-storage | vg-manager-q84n6 | Scheduled | Successfully assigned openshift-storage/vg-manager-q84n6 to master-0
openshift-storage | lvms-operator-7bbcf6487b-nkgxz | Scheduled | Successfully assigned openshift-storage/lvms-operator-7bbcf6487b-nkgxz to master-0
openshift-operators | perses-operator-5bf474d74f-rpqh9 | Scheduled | Successfully assigned openshift-operators/perses-operator-5bf474d74f-rpqh9 to master-0
openshift-operators | observability-operator-59bdc8b94-8lklf | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-8lklf to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-66946c8978-qbg2d | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-66946c8978-9t8v8 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8 to master-0
metallb-system | speaker-lbfkl | Scheduled | Successfully assigned metallb-system/speaker-lbfkl to master-0
openshift-operators | obo-prometheus-operator-68bc856cb9-2lpl8 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8 to master-0
openshift-nmstate | nmstate-webhook-866bcb46dc-rft7d | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d to master-0
openshift-nmstate | nmstate-operator-694c9596b7-xp57m | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-xp57m to master-0
openshift-nmstate | nmstate-metrics-58c85c668d-zx9wt | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt to master-0
openshift-nmstate | nmstate-handler-bpzvz | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-bpzvz to master-0
openshift-nmstate | nmstate-console-plugin-5c78fc5d65-nsdtc | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc to master-0
openshift-marketplace | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4 | Scheduled | Successfully assigned openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4 to master-0
openshift-marketplace | certified-operators-brpmb | Scheduled | Successfully assigned openshift-marketplace/certified-operators-brpmb to master-0
openshift-operators | obo-prometheus-operator-68bc856cb9-2lpl8 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8 to master-0
openshift-marketplace | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68 | Scheduled | Successfully assigned openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68 to master-0
openstack-operators | infra-operator-controller-manager-5f879c76b6-2kk8t | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t to master-0
openstack-operators | horizon-operator-controller-manager-5b9b8895d5-49gvb | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb to master-0
openstack-operators | heat-operator-controller-manager-69f49c598c-5t6bt | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt to master-0
openshift-marketplace | redhat-marketplace-qqt7p | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-qqt7p to master-0
openshift-marketplace | redhat-operators-4znnj | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-4znnj to master-0
openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openstack-operators | glance-operator-controller-manager-784b5bb6c5-zfd69 | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69 to master-0
openstack-operators | designate-operator-controller-manager-6d8bf5c495-dzbvc | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc to master-0
openstack-operators | cinder-operator-controller-manager-55d77d7b5c-b72xt | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt to master-0
cert-manager | cert-manager-545d4d4674-54xdp | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-54xdp to master-0
openstack | ironic-conductor-0 | Scheduled | Successfully assigned openstack/ironic-conductor-0 to master-0
openshift-monitoring | kube-state-metrics-59584d565f-f6f26 | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-59584d565f-f6f26 to master-0
openshift-monitoring | metrics-server-67ddc7b799-zlnvf | Scheduled | Successfully assigned openshift-monitoring/metrics-server-67ddc7b799-zlnvf to master-0
sushy-emulator | nova-console-recorder-856878b5df-4lhhs | Scheduled | Successfully assigned sushy-emulator/nova-console-recorder-856878b5df-4lhhs to master-0
openshift-monitoring | metrics-server-7b9cc5984b-smpdl | Scheduled | Successfully assigned openshift-monitoring/metrics-server-7b9cc5984b-smpdl to master-0
openshift-monitoring | monitoring-plugin-5d9ddb8754-xtrdd | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd to master-0
openshift-monitoring | node-exporter-2qn8m | Scheduled | Successfully assigned openshift-monitoring/node-exporter-2qn8m to master-0
openshift-monitoring | openshift-state-metrics-6dbff8cb4c-swtr6 | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6 to master-0
openshift-monitoring | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
openshift-monitoring | prometheus-operator-754bc4d665-66lml | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-754bc4d665-66lml to master-0
openshift-monitoring | prometheus-operator-admission-webhook-75d56db95f-9gkp2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | prometheus-operator-admission-webhook-75d56db95f-9gkp2 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2 to master-0
openshift-monitoring | telemeter-client-cc55f5fb6-hcn4g | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g to master-0
openshift-monitoring | thanos-querier-69565684c5-snfqm | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-69565684c5-snfqm to master-0
openshift-multus | cni-sysctl-allowlist-ds-75qmm | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-75qmm to master-0
openstack | ovn-controller-ovs-lp2wm | Scheduled | Successfully assigned openstack/ovn-controller-ovs-lp2wm to master-0
openstack-operators | barbican-operator-controller-manager-868647ff47-2ldv2 | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2 to master-0
openshift-controller-manager | controller-manager-c67bf58c9-mn7dg | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-c67bf58c9-mn7dg to master-0
openshift-controller-manager | controller-manager-c67bf58c9-mn7dg | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | controller-manager-c67bf58c9-mn7dg | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-multus | multus-admission-controller-5f54bf67d4-ctssl | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f54bf67d4-ctssl to master-0
openstack-operators | 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq | Scheduled | Successfully assigned openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq to master-0
openshift-storage | vg-manager-q84n6 | Scheduled | Successfully assigned openshift-storage/vg-manager-q84n6 to master-0
openshift-controller-manager | controller-manager-56b6d9c5b7-lxwt6 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6 to master-0
openshift-controller-manager | controller-manager-56b6d9c5b7-lxwt6 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-network-console | networking-console-plugin-79f587d78f-6bkc6 | Scheduled | Successfully assigned openshift-network-console/networking-console-plugin-79f587d78f-6bkc6 to master-0

kube-system | | | | Required control plane pods have been created
kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)
kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_a433f812-0f07-44e8-8417-5ee42aa48288 became leader
kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_1f34b8ad-b467-46d7-88e7-7bd90052c50c became leader
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_6a166aba-f7d4-4eb6-be6d-5543663f7b58 became leader
default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_3d540c35-eca6-40ff-b397-21d8a2a81448 became leader
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_734fd319-d48c-4fef-8b50-37e6ccb2b1ec became leader
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace
assisted-installer | job-controller | assisted-installer-controller | FailedCreate | Error creating: pods "assisted-installer-controller-" is forbidden: error looking up service account assisted-installer/assisted-installer-controller: serviceaccount "assisted-installer-controller" not found (x2)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace
assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-f2lj9
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_8205d5e3-42b1-4837-8ace-77d4117db672 became leader
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_8205d5e3-42b1-4837-8ace-77d4117db672 stopped leading
openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-5cfd9759cf to 1
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_56e75091-16c4-4d83-b1e9-2825a9684c9f became leader
openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"
openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace
openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace

openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-7d7db75979 to 1
openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-7bcfbc574b to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace
openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-77cd4d9559 to 1
openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-c48c8bf7c to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace
openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-8c7d49845 to 1
openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-5bd7768f54 to 1
openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-8586dccc9b to 1
openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-545bf96f4d to 1
openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-6f5488b997 to 1
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found (x2)
openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-fc889cfd5 to 1
openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-584cc7bcb5 to 1
openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-5bd7c86784 to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace
openshift-network-operator | replicaset-controller | network-operator-7d7db75979 | FailedCreate | Error creating: pods "network-operator-7d7db75979-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
assisted-installer | default-scheduler | assisted-installer-controller-f2lj9 | FailedScheduling | no nodes available to schedule pods (x9)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-7bcfbc574b | FailedCreate | Error creating: pods "kube-controller-manager-operator-7bcfbc574b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-77cd4d9559 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-77cd4d9559-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace
openshift-dns-operator | replicaset-controller | dns-operator-8c7d49845 | FailedCreate | Error creating: pods "dns-operator-8c7d49845-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-c48c8bf7c | FailedCreate | Error creating: pods "service-ca-operator-c48c8bf7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-8586dccc9b | FailedCreate | Error creating: pods "openshift-apiserver-operator-8586dccc9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-fc889cfd5 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-fc889cfd5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-5bd7768f54 | FailedCreate | Error creating: pods "cluster-olm-operator-5bd7768f54-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-etcd-operator | replicaset-controller | etcd-operator-545bf96f4d | FailedCreate | Error creating: pods "etcd-operator-545bf96f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-584cc7bcb5 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-584cc7bcb5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-authentication-operator | replicaset-controller | authentication-operator-5bd7c86784 | FailedCreate | Error creating: pods "authentication-operator-5bd7c86784-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator

ScalingReplicaSet

Scaled up replica set cluster-node-tuning-operator-bcf775fc9 to 1
(x12)

openshift-marketplace

replicaset-controller

marketplace-operator-6f5488b997

FailedCreate

Error creating: pods "marketplace-operator-6f5488b997-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller-operator

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-operator-6fb4df594f to 1

openshift-cluster-node-tuning-operator

deployment-controller

cluster-node-tuning-operator

ScalingReplicaSet

Scaled up replica set cluster-node-tuning-operator-bcf775fc9 to 1
(x14)

openshift-cluster-version

replicaset-controller

cluster-version-operator-5cfd9759cf

FailedCreate

Error creating: pods "cluster-version-operator-5cfd9759cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-6bb6d78bf to 1

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-6bb6d78bf to 1

openshift-ingress-operator

deployment-controller

ingress-operator

ScalingReplicaSet

Scaled up replica set ingress-operator-6569778c84 to 1

openshift-operator-lifecycle-manager

deployment-controller

package-server-manager

ScalingReplicaSet

Scaled up replica set package-server-manager-5c75f78c8b to 1

openshift-kube-apiserver-operator

deployment-controller

kube-apiserver-operator

ScalingReplicaSet

Scaled up replica set kube-apiserver-operator-5d87bf58c to 1

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-5499d7f7bb to 1

openshift-image-registry

deployment-controller

cluster-image-registry-operator

ScalingReplicaSet

Scaled up replica set cluster-image-registry-operator-779979bdf7 to 1

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-6f47d587d6 to 1
(x9)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-779979bdf7

FailedCreate

Error creating: pods "cluster-image-registry-operator-779979bdf7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-5d87bf58c

FailedCreate

Error creating: pods "kube-apiserver-operator-5d87bf58c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-5c75f78c8b

FailedCreate

Error creating: pods "package-server-manager-5c75f78c8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

FailedCreate

Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

FailedCreate

Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-596f79dd6f to 1

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening
(x11)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-596f79dd6f

FailedCreate

Error creating: pods "catalog-operator-596f79dd6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-d6bb9bb76 to 1
(x7)

openshift-machine-config-operator

replicaset-controller

machine-config-operator-7f8c75f984

FailedCreate

Error creating: pods "machine-config-operator-7f8c75f984-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-7f8c75f984 to 1

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-d6bb9bb76 to 1
(x8)

openshift-config-operator

replicaset-controller

openshift-config-operator-6f47d587d6

FailedCreate

Error creating: pods "openshift-config-operator-6f47d587d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-d6bb9bb76

FailedCreate

Error creating: pods "cluster-baremetal-operator-d6bb9bb76-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-d6bb9bb76

FailedCreate

Error creating: pods "cluster-baremetal-operator-d6bb9bb76-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

Required control plane pods have been created
(x10)

openshift-ingress-operator

replicaset-controller

ingress-operator-6569778c84

FailedCreate

Error creating: pods "ingress-operator-6569778c84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-5499d7f7bb

FailedCreate

Error creating: pods "olm-operator-5499d7f7bb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving
(x11)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-6fb4df594f

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-6fb4df594f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x11)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_0d8ffad3-e8f1-41ab-8368-4e86b68247e3 became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_33db0664-0387-49fe-ac6c-bca5a5b05899 became leader

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29531640

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_dff887f3-3d08-4226-bb7c-82c8b734951f became leader
(x6)

assisted-installer

default-scheduler

assisted-installer-controller-f2lj9

FailedScheduling

no nodes available to schedule pods
(x6)

openshift-marketplace

replicaset-controller

marketplace-operator-6f5488b997

FailedCreate

Error creating: pods "marketplace-operator-6f5488b997-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-596f79dd6f

FailedCreate

Error creating: pods "catalog-operator-596f79dd6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-d6bb9bb76

FailedCreate

Error creating: pods "cluster-baremetal-operator-d6bb9bb76-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x5)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-c48c8bf7c

FailedCreate

Error creating: pods "service-ca-operator-c48c8bf7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x5)

openshift-network-operator

replicaset-controller

network-operator-7d7db75979

FailedCreate

Error creating: pods "network-operator-7d7db75979-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-5499d7f7bb

FailedCreate

Error creating: pods "olm-operator-5499d7f7bb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-5bd7768f54

FailedCreate

Error creating: pods "cluster-olm-operator-5bd7768f54-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-8586dccc9b

FailedCreate

Error creating: pods "openshift-apiserver-operator-8586dccc9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-d6bb9bb76

FailedCreate

Error creating: pods "cluster-baremetal-operator-d6bb9bb76-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-5c75f78c8b

FailedCreate

Error creating: pods "package-server-manager-5c75f78c8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-etcd-operator

replicaset-controller

etcd-operator-545bf96f4d

FailedCreate

Error creating: pods "etcd-operator-545bf96f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-config-operator

replicaset-controller

openshift-config-operator-6f47d587d6

FailedCreate

Error creating: pods "openshift-config-operator-6f47d587d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-dns-operator

replicaset-controller

dns-operator-8c7d49845

FailedCreate

Error creating: pods "dns-operator-8c7d49845-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-7bcfbc574b

FailedCreate

Error creating: pods "kube-controller-manager-operator-7bcfbc574b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-584cc7bcb5

FailedCreate

Error creating: pods "openshift-controller-manager-operator-584cc7bcb5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x3)

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531640

FailedCreate

Error creating: pods "collect-profiles-29531640-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-6fb4df594f

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-6fb4df594f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-5d87bf58c

FailedCreate

Error creating: pods "kube-apiserver-operator-5d87bf58c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-authentication-operator

replicaset-controller

authentication-operator-5bd7c86784

FailedCreate

Error creating: pods "authentication-operator-5bd7c86784-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-version

replicaset-controller

cluster-version-operator-5cfd9759cf

FailedCreate

Error creating: pods "cluster-version-operator-5cfd9759cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-5499d7f7bb

SuccessfulCreate

Created pod: olm-operator-5499d7f7bb-5g6nc
(x7)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

FailedCreate

Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

FailedCreate

Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-5c75f78c8b-2hllb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-d6bb9bb76

SuccessfulCreate

Created pod: cluster-baremetal-operator-d6bb9bb76-k98fq
(x7)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-77cd4d9559

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-77cd4d9559-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-ingress-operator

replicaset-controller

ingress-operator-6569778c84

FailedCreate

Error creating: pods "ingress-operator-6569778c84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-machine-config-operator

replicaset-controller

machine-config-operator-7f8c75f984

FailedCreate

Error creating: pods "machine-config-operator-7f8c75f984-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-machine-api

default-scheduler

cluster-baremetal-operator-d6bb9bb76-k98fq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-d6bb9bb76

SuccessfulCreate

Created pod: cluster-baremetal-operator-d6bb9bb76-k98fq
(x7)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-fc889cfd5

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-fc889cfd5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-5499d7f7bb-5g6nc

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-5c75f78c8b

SuccessfulCreate

Created pod: package-server-manager-5c75f78c8b-2hllb

openshift-service-ca-operator

default-scheduler

service-ca-operator-c48c8bf7c-6fqkr

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
(x7)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-779979bdf7

FailedCreate

Error creating: pods "cluster-image-registry-operator-779979bdf7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-service-ca-operator

replicaset-controller

service-ca-operator-c48c8bf7c

SuccessfulCreate

Created pod: service-ca-operator-c48c8bf7c-6fqkr

openshift-machine-api

default-scheduler

cluster-baremetal-operator-d6bb9bb76-k98fq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-596f79dd6f

SuccessfulCreate

Created pod: catalog-operator-596f79dd6f-8cg5c

openshift-cluster-version

default-scheduler

cluster-version-operator-5cfd9759cf-v5tpt

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt to master-0

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-6fb4df594f

SuccessfulCreate

Created pod: csi-snapshot-controller-operator-6fb4df594f-c95qc

openshift-network-operator

default-scheduler

network-operator-7d7db75979-drrqm

Scheduled

Successfully assigned openshift-network-operator/network-operator-7d7db75979-drrqm to master-0

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-5bd7768f54

SuccessfulCreate

Created pod: cluster-olm-operator-5bd7768f54-7wc6k

openshift-cluster-version

replicaset-controller

cluster-version-operator-5cfd9759cf

SuccessfulCreate

Created pod: cluster-version-operator-5cfd9759cf-v5tpt

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-6fb4df594f-c95qc

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-bcf775fc9-8x6sd

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-bcf775fc9-8x6sd

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-dns-operator

default-scheduler

dns-operator-8c7d49845-hxcn2

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-authentication-operator

replicaset-controller

authentication-operator-5bd7c86784

SuccessfulCreate

Created pod: authentication-operator-5bd7c86784-46vmq

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-8586dccc9b

SuccessfulCreate

Created pod: openshift-apiserver-operator-8586dccc9b-sl5hz

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-596f79dd6f-8cg5c

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

SuccessfulCreate

Created pod: cluster-node-tuning-operator-bcf775fc9-8x6sd

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bcf775fc9

SuccessfulCreate

Created pod: cluster-node-tuning-operator-bcf775fc9-8x6sd

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-5bd7768f54-7wc6k

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-marketplace

default-scheduler

marketplace-operator-6f5488b997-4qf9p

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-marketplace

replicaset-controller

marketplace-operator-6f5488b997

SuccessfulCreate

Created pod: marketplace-operator-6f5488b997-4qf9p

openshift-authentication-operator

default-scheduler

authentication-operator-5bd7c86784-46vmq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-dns-operator

replicaset-controller

dns-operator-8c7d49845

SuccessfulCreate

Created pod: dns-operator-8c7d49845-hxcn2

openshift-network-operator

replicaset-controller

network-operator-7d7db75979

SuccessfulCreate

Created pod: network-operator-7d7db75979-drrqm

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-8586dccc9b-sl5hz

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

kubelet

network-operator-7d7db75979-drrqm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83"

openshift-controller-manager-operator

default-scheduler

openshift-controller-manager-operator-584cc7bcb5-c7fgn

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-5d87bf58c

SuccessfulCreate

Created pod: kube-apiserver-operator-5d87bf58c-2492q

openshift-etcd-operator

replicaset-controller

etcd-operator-545bf96f4d

SuccessfulCreate

Created pod: etcd-operator-545bf96f4d-jb9vb

openshift-kube-controller-manager-operator

default-scheduler

kube-controller-manager-operator-7bcfbc574b-tl97n

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-config-operator

replicaset-controller

openshift-config-operator-6f47d587d6

SuccessfulCreate

Created pod: openshift-config-operator-6f47d587d6-ccrxg

openshift-config-operator

default-scheduler

openshift-config-operator-6f47d587d6-ccrxg

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-5d87bf58c-2492q

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-machine-config-operator

replicaset-controller

machine-config-operator-7f8c75f984

SuccessfulCreate

Created pod: machine-config-operator-7f8c75f984-ffnq7

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

SuccessfulCreate

Created pod: cluster-monitoring-operator-6bb6d78bf-fkzdb

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-6bb6d78bf

SuccessfulCreate

Created pod: cluster-monitoring-operator-6bb6d78bf-fkzdb

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-584cc7bcb5

SuccessfulCreate

Created pod: openshift-controller-manager-operator-584cc7bcb5-c7fgn

openshift-machine-config-operator

default-scheduler

machine-config-operator-7f8c75f984-ffnq7

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ingress-operator

replicaset-controller

ingress-operator-6569778c84

SuccessfulCreate

Created pod: ingress-operator-6569778c84-6dlqb

openshift-etcd-operator

default-scheduler

etcd-operator-545bf96f4d-jb9vb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ingress-operator

default-scheduler

ingress-operator-6569778c84-6dlqb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-7bcfbc574b

SuccessfulCreate

Created pod: kube-controller-manager-operator-7bcfbc574b-tl97n

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-77cd4d9559

SuccessfulCreate

Created pod: openshift-kube-scheduler-operator-77cd4d9559-8tttg

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-fc889cfd5

SuccessfulCreate

Created pod: kube-storage-version-migrator-operator-fc889cfd5-xdws2

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-779979bdf7

SuccessfulCreate

Created pod: cluster-image-registry-operator-779979bdf7-d7sx4

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-77cd4d9559-8tttg

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

default-scheduler

collect-profiles-29531640-kptmw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531640

SuccessfulCreate

Created pod: collect-profiles-29531640-kptmw

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-fc889cfd5-xdws2

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

default-scheduler

cluster-monitoring-operator-6bb6d78bf-fkzdb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

default-scheduler

cluster-monitoring-operator-6bb6d78bf-fkzdb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-image-registry

default-scheduler

cluster-image-registry-operator-779979bdf7-d7sx4

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

kubelet

network-operator-7d7db75979-drrqm

Started

Started container network-operator

openshift-network-operator

kubelet

network-operator-7d7db75979-drrqm

Created

Created container: network-operator

openshift-network-operator

kubelet

network-operator-7d7db75979-drrqm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" in 3.506s (3.506s including waiting). Image size: 621542709 bytes.

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_edd6aa08-5f71-459a-8ba8-5636dda0a6c9 became leader
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Created

Created container: kube-rbac-proxy-crio
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Started

Started container kube-rbac-proxy-crio

assisted-installer

default-scheduler

assisted-installer-controller-f2lj9

Scheduled

Successfully assigned assisted-installer/assisted-installer-controller-f2lj9 to master-0

assisted-installer

kubelet

assisted-installer-controller-f2lj9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8"

openshift-network-operator

default-scheduler

mtu-prober-dm7d7

Scheduled

Successfully assigned openshift-network-operator/mtu-prober-dm7d7 to master-0

openshift-network-operator

kubelet

mtu-prober-dm7d7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine

openshift-network-operator

job-controller

mtu-prober

SuccessfulCreate

Created pod: mtu-prober-dm7d7

openshift-network-operator

kubelet

mtu-prober-dm7d7

Started

Started container prober

openshift-network-operator

kubelet

mtu-prober-dm7d7

Created

Created container: prober

openshift-network-operator

job-controller

mtu-prober

Completed

Job completed

assisted-installer

kubelet

assisted-installer-controller-f2lj9

Created

Created container: assisted-installer-controller

assisted-installer

kubelet

assisted-installer-controller-f2lj9

Started

Started container assisted-installer-controller

assisted-installer

kubelet

assisted-installer-controller-f2lj9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8" in 4.863s (4.863s including waiting). Image size: 687849728 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-multus namespace

assisted-installer

job-controller

assisted-installer-controller

Completed

Job completed

openshift-multus

default-scheduler

multus-7fbjw

Scheduled

Successfully assigned openshift-multus/multus-7fbjw to master-0

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-7fbjw

openshift-multus

default-scheduler

multus-additional-cni-plugins-jtdht

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-jtdht to master-0

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-jtdht

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-tntcf

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec"

openshift-multus

kubelet

multus-7fbjw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd"

openshift-multus

default-scheduler

network-metrics-daemon-tntcf

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-tntcf to master-0

openshift-multus

default-scheduler

multus-admission-controller-5f98f4f8d5-dg77f

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

replicaset-controller

multus-admission-controller-5f98f4f8d5

SuccessfulCreate

Created pod: multus-admission-controller-5f98f4f8d5-dg77f

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-5f98f4f8d5 to 1

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e"

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Started

Started container egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Created

Created container: egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec" in 2.77s (2.77s including waiting). Image size: 528829499 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovn-kubernetes namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-host-network namespace

openshift-ovn-kubernetes

default-scheduler

ovnkube-control-plane-5d8dfcdc87-bb22k

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k to master-0

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-m5kbp

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-m5kbp to master-0

openshift-ovn-kubernetes

deployment-controller

ovnkube-control-plane

ScalingReplicaSet

Scaled up replica set ovnkube-control-plane-5d8dfcdc87 to 1

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-5d8dfcdc87

SuccessfulCreate

Created pod: ovnkube-control-plane-5d8dfcdc87-bb22k

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-m5kbp

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-diagnostics namespace

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e" in 10.57s (10.57s including waiting). Image size: 682963466 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd"

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Started

Started container kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Created

Created container: kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

multus-7fbjw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" in 14.096s (14.097s including waiting). Image size: 1237794314 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568"

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Started

Started container cni-plugins

openshift-multus

kubelet

multus-7fbjw

Created

Created container: kube-multus

openshift-multus

kubelet

multus-7fbjw

Started

Started container kube-multus

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd"

openshift-network-diagnostics

deployment-controller

network-check-source

ScalingReplicaSet

Scaled up replica set network-check-source-58fb6744f5 to 1

openshift-network-diagnostics

replicaset-controller

network-check-source-58fb6744f5

SuccessfulCreate

Created pod: network-check-source-58fb6744f5-l4wh6

openshift-network-diagnostics

default-scheduler

network-check-source-58fb6744f5-l4wh6

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568" in 891ms (891ms including waiting). Image size: 411485245 bytes.

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-54b95

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Created

Created container: bond-cni-plugin

openshift-network-diagnostics

default-scheduler

network-check-target-54b95

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-54b95 to master-0

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-node-identity namespace

openshift-network-node-identity

default-scheduler

network-node-identity-p5b6q

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-p5b6q to master-0

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-p5b6q

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0" in 2.003s (2.003s including waiting). Image size: 407241636 bytes.

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Started

Started container routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Created

Created container: routeoverride-cni

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd"

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021"

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 17.38s (17.38s including waiting). Image size: 1637274270 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" in 12.039s (12.039s including waiting). Image size: 875998518 bytes.

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 12.286s (12.286s including waiting). Image size: 1637274270 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 17.583s (17.583s including waiting). Image size: 1637274270 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Started

Started container ovnkube-cluster-manager

openshift-network-node-identity

master-0_396c7f5d-6d66-49ca-b9a0-99d08d1499de

ovnkube-identity

LeaderElection

master-0_396c7f5d-6d66-49ca-b9a0-99d08d1499de became leader

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Started

Started container approver

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Created

Created container: ovnkube-cluster-manager

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Created

Created container: approver

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-5d8dfcdc87-bb22k became leader

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Created

Created container: whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Started

Started container whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Started

Started container webhook

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Created

Created container: webhook
(x7)

openshift-multus

kubelet

network-metrics-daemon-tntcf

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered
(x7)

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Started

Started container whereabouts-cni

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container ovn-acl-logging

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Created

Created container: whereabouts-cni

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-jtdht

Created

Created container: kube-multus-additional-cni-plugins
(x18)

openshift-multus

kubelet

network-metrics-daemon-tntcf

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
(x18)

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-m5kbp

Started

Started container sbdb

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-m5kbp

default

ovnkube-csr-approver-controller

csr-vqssv

CSRApproved

CSR "csr-vqssv" has been approved

openshift-ovn-kubernetes

kubelet

ovnkube-node-rg9r6

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-rg9r6

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-rg9r6 to master-0

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-rg9r6

openshift-ovn-kubernetes

kubelet

ovnkube-node-rg9r6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rg9r6

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-rg9r6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rg9r6

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-rg9r6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rg9r6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-ovn-kubernetes

openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Started | Started container northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Started | Started container nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Created | Created container: ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Created | Created container: northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Started | Started container ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Created | Created container: ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Started | Started container kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Created | Created container: sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Started | Started container sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-rg9r6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine

openshift-network-diagnostics | ovnk-controlplane | network-check-target-54b95 | ErrorUpdatingResource | addLogicalPort failed for openshift-network-diagnostics/network-check-target-54b95: failed to create ops to update SNAT for pods of router: GR_master-0, error: unable to get NAT entries for router &{UUID: Copp:<nil> Enabled:<nil> ExternalIDs:map[] LoadBalancer:[] LoadBalancerGroup:[] Name:GR_master-0 Nat:[] Options:map[] Policies:[] Ports:[] StaticRoutes:[]}: failed to get router: GR_master-0, error: object not found
openshift-network-diagnostics | ovnk-controlplane | network-check-target-54b95 | ErrorAddingResource | addLogicalPort failed for openshift-network-diagnostics/network-check-target-54b95: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0" (x7)
openshift-network-diagnostics | kubelet | network-check-target-54b95 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-nn8hz" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] (x8)
openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-v5tpt | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
openshift-multus | ovnk-controlplane | network-metrics-daemon-tntcf | ErrorUpdatingResource | addLogicalPort failed for openshift-multus/network-metrics-daemon-tntcf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0"
default | ovnk-controlplane | master-0 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0]
openshift-multus | ovnk-controlplane | network-metrics-daemon-tntcf | ErrorAddingResource | addLogicalPort failed for openshift-multus/network-metrics-daemon-tntcf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0"
openshift-multus | ovnk-controlplane | network-metrics-daemon-tntcf | ErrorAddingResource | addLogicalPort failed for openshift-multus/network-metrics-daemon-tntcf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0"
openshift-multus | ovnk-controlplane | network-metrics-daemon-tntcf | ErrorUpdatingResource | addLogicalPort failed for openshift-multus/network-metrics-daemon-tntcf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0" (x18)
openshift-network-diagnostics | kubelet | network-check-target-54b95 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
default | ovnkube-csr-approver-controller | csr-sr4f6 | CSRApproved | CSR "csr-sr4f6" has been approved

openshift-multus | default-scheduler | multus-admission-controller-5f98f4f8d5-dg77f | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f to master-0
openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-77cd4d9559-8tttg | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg to master-0
openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-5bd7768f54-7wc6k | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k to master-0
openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-fc889cfd5-xdws2 | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2 to master-0
openshift-operator-lifecycle-manager | default-scheduler | olm-operator-5499d7f7bb-5g6nc | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc to master-0
openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-596f79dd6f-8cg5c | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c to master-0
openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-bcf775fc9-8x6sd | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd to master-0
openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-584cc7bcb5-c7fgn | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn to master-0
openshift-network-operator | default-scheduler | iptables-alerter-rjbl5 | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-rjbl5 to master-0
openshift-machine-api | default-scheduler | cluster-baremetal-operator-d6bb9bb76-k98fq | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq to master-0
openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-6fb4df594f-c95qc | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc to master-0
openshift-monitoring | default-scheduler | cluster-monitoring-operator-6bb6d78bf-fkzdb | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb to master-0
openshift-machine-config-operator | default-scheduler | machine-config-operator-7f8c75f984-ffnq7 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7 to master-0
openshift-marketplace | default-scheduler | marketplace-operator-6f5488b997-4qf9p | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-6f5488b997-4qf9p to master-0
openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-8586dccc9b-sl5hz | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz to master-0
openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-5d87bf58c-2492q | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q to master-0
openshift-image-registry | default-scheduler | cluster-image-registry-operator-779979bdf7-d7sx4 | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4 to master-0
openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-7bcfbc574b-tl97n | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n to master-0
openshift-ingress-operator | default-scheduler | ingress-operator-6569778c84-6dlqb | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-6569778c84-6dlqb to master-0
openshift-authentication-operator | default-scheduler | authentication-operator-5bd7c86784-46vmq | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq to master-0
openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-5c75f78c8b-2hllb | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb to master-0
openshift-etcd-operator | default-scheduler | etcd-operator-545bf96f4d-jb9vb | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb to master-0
openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-rjbl5
openshift-service-ca-operator | default-scheduler | service-ca-operator-c48c8bf7c-6fqkr | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr to master-0
openshift-config-operator | default-scheduler | openshift-config-operator-6f47d587d6-ccrxg | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg to master-0

openshift-dns-operator | default-scheduler | dns-operator-8c7d49845-hxcn2 | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-8c7d49845-hxcn2 to master-0
openshift-cluster-olm-operator | multus | cluster-olm-operator-5bd7768f54-7wc6k | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes
openshift-network-operator | kubelet | iptables-alerter-rjbl5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9"
openshift-etcd-operator | multus | etcd-operator-545bf96f4d-jb9vb | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-2492q | Started | Started container kube-apiserver-operator
openshift-authentication-operator | multus | authentication-operator-5bd7c86784-46vmq | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes
openshift-service-ca-operator | multus | service-ca-operator-c48c8bf7c-6fqkr | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes
openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-6fqkr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83"
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-2492q | Created | Created container: kube-apiserver-operator
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-tl97n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac"
openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-7bcfbc574b-tl97n | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-2492q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine
openshift-apiserver-operator | multus | openshift-apiserver-operator-8586dccc9b-sl5hz | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes
openshift-authentication-operator | kubelet | authentication-operator-5bd7c86784-46vmq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e"
openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-jb9vb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396"
openshift-kube-apiserver-operator | multus | kube-apiserver-operator-5d87bf58c-2492q | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-ccrxg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126"
openshift-config-operator | multus | openshift-config-operator-6f47d587d6-ccrxg | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-c95qc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e"
openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-6fb4df594f-c95qc | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes
openshift-controller-manager-operator | multus | openshift-controller-manager-operator-584cc7bcb5-c7fgn | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-8tttg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7"
openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-77cd4d9559-8tttg | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-c7fgn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896"
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-7wc6k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2"
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-sl5hz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19"
openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-fc889cfd5-xdws2 | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-xdws2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc"

openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.33"
openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc"
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5d87bf58c-2492q_084d33f1-a808-4290-b7b5-02eaa636974e became leader (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready
openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready",Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected changed from Unknown to False ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.33"}]
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist (x5)

openshift-marketplace | kubelet | marketplace-operator-6f5488b997-4qf9p | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found (x5)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-k98fq | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found (x5)
openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-d7sx4 | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist (x5)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-8x6sd | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-7wc6k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" in 6.601s (6.601s including waiting). Image size: 447940744 bytes.
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing (x5)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-8x6sd | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found (x5)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-8x6sd | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-ccrxg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126" in 7.39s (7.39s including waiting). Image size: 438548891 bytes. (x5)
openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-fkzdb | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing (x5)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-8x6sd | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found (x5)
openshift-operator-lifecycle-manager | kubelet | olm-operator-5499d7f7bb-5g6nc | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found (x5)
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-dg77f | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found (x5)
openshift-operator-lifecycle-manager | kubelet | catalog-operator-596f79dd6f-8cg5c | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found (x5)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-k98fq | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found (x5)

openshift-dns-operator | kubelet | dns-operator-8c7d49845-hxcn2 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found (x5)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-k98fq | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found (x5)
openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-ffnq7 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "mco-proxy-tls" not found (x5)
openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-dg77f | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found (x5)
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-2hllb | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found (x5)
openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-fkzdb | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found (x5)
openshift-ingress-operator | kubelet | ingress-operator-6569778c84-6dlqb | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found (x5)
openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-k98fq | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
default | kubelet | master-0 | Starting | Starting kubelet.
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed
default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID
default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-InternalLoadBalancerServing-certrotationcontroller

kube-apiserver-operator

RotationError

configmaps "loadbalancer-serving-ca" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-jb9vb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-xdws2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc"

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e"

openshift-service-ca-operator

kubelet

service-ca-operator-c48c8bf7c-6fqkr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-77cd4d9559-8tttg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7"

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-7bcfbc574b-tl97n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-c7fgn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-8586dccc9b-sl5hz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Started

Started container copy-catalogd-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Created

Created container: copy-catalogd-manifests

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-77cd4d9559-8tttg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" in 1.418s (1.418s including waiting). Image size: 506291135 bytes.

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-7bcfbc574b-tl97n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" in 1.239s (1.239s including waiting). Image size: 508786786 bytes.

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-xdws2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" in 1.702s (1.702s including waiting). Image size: 504513960 bytes.

openshift-network-operator

kubelet

iptables-alerter-rjbl5

Failed

Error: ErrImagePull

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing

openshift-network-operator

kubelet

iptables-alerter-rjbl5

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9": rpc error: code = Canceled desc = copying config: context canceled

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" already present on machine

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" in 1.713s (1.714s including waiting). Image size: 513119434 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-8586dccc9b-sl5hz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" in 1.31s (1.31s including waiting). Image size: 512172666 bytes.

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-c7fgn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" in 1.418s (1.418s including waiting). Image size: 507867630 bytes.

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-jb9vb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" in 1.905s (1.905s including waiting). Image size: 518279996 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126" already present on machine

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Created

Created container: openshift-api

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Started

Started container openshift-api

openshift-network-diagnostics

kubelet

network-check-target-54b95

Started

Started container network-check-target-container

openshift-service-ca-operator

kubelet

service-ca-operator-c48c8bf7c-6fqkr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" in 1.537s (1.537s including waiting). Image size: 508443359 bytes.

openshift-network-diagnostics

kubelet

network-check-target-54b95

Created

Created container: network-check-target-container

openshift-network-diagnostics

kubelet

network-check-target-54b95

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine

openshift-network-diagnostics

multus

network-check-target-54b95

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well",Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-6fb4df594f-c95qc_3423e480-6af2-480d-ae58-5936c669546a became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-kube-storage-version-migrator

default-scheduler

migrator-5c85bff57-t5rgn

Scheduled

Successfully assigned openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn to master-0

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-6847bb4785 to 1

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-6847bb4785

SuccessfulCreate

Created pod: csi-snapshot-controller-6847bb4785-8l58x

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-6847bb4785-8l58x

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x to master-0

openshift-kube-storage-version-migrator

replicaset-controller

migrator-5c85bff57

SuccessfulCreate

Created pod: migrator-5c85bff57-t5rgn

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.33"

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-5c85bff57 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-584cc7bcb5-c7fgn_43d48f54-e338-4a22-84e3-c4b38b9128ee became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "build": map[string]any{ +  "buildDefaults": map[string]any{"resources": map[string]any{}}, +  "imageTemplateFormat": map[string]any{ +  "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7e373bb5"...), +  }, +  }, +  "controllers": []any{ +  string("openshift.io/build"), string("openshift.io/build-config-change"), +  string("openshift.io/builder-rolebindings"), +  string("openshift.io/builder-serviceaccount"), +  string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), +  string("openshift.io/deployer-rolebindings"), +  string("openshift.io/deployer-serviceaccount"), +  string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), +  string("openshift.io/image-puller-rolebindings"), +  string("openshift.io/image-signature-import"), +  string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), +  string("openshift.io/ingress-to-route"), +  string("openshift.io/origin-namespace"), ..., +  }, +  "deployer": map[string]any{ +  "imageTemplateFormat": map[string]any{ +  "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f7696d1b6"...), +  }, +  }, +  "featureGates": []any{string("BuildCSIVolumes=true")}, +  "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   }

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ServiceAccountCreated

Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-fc889cfd5-xdws2_3fd4451b-543e-440f-a4fd-0c8521309e2f became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-7bcfbc574b-tl97n_55f76986-a3c0-44fb-b278-e3386678554e became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.33"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.33"}]

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-5bd7c86784-46vmq_e8386e41-1d89-48dc-bf7c-c555e9be6fa7 became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.33"

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-8586dccc9b-sl5hz_0e1553fa-b470-4c52-89f7-2f07dfdd6332 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available changed from Unknown to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-77cd4d9559-8tttg_c18b977d-59ed-487a-992a-74864c12ce82 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.33"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.33"}]

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager namespace

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well")

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-route-controller-manager namespace

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6c9b8f4d95 to 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-c48c8bf7c-6fqkr_f1940ea3-92ed-49d2-980f-82320915c1a4 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Progressing changed from Unknown to False ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready",Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-controller-manager

replicaset-controller

controller-manager-6c9b8f4d95

SuccessfulCreate

Created pod: controller-manager-6c9b8f4d95-whcm2 (x7)

openshift-controller-manager

replicaset-controller

controller-manager-6c9b8f4d95

FailedCreate

Error creating: pods "controller-manager-6c9b8f4d95-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "operator" changed from "" to "4.18.33"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca namespace

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-545bf96f4d-jb9vb_445afc0c-bd78-4b7e-b96f-268f5b99fb3e became leader

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-t5rgn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015"

openshift-kube-storage-version-migrator

multus

migrator-5c85bff57-t5rgn

AddedInterface

Add eth0 [10.128.0.30/23] from ovn-kubernetes

openshift-controller-manager

default-scheduler

controller-manager-6c9b8f4d95-whcm2

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2 to master-0

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-8l58x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9"

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreateFailed

Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create configmap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-cluster-storage-operator

multus

csi-snapshot-controller-6847bb4785-8l58x

AddedInterface

Add eth0 [10.128.0.29/23] from ovn-kubernetes

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nRevisionControllerDegraded: configmap \"etcd-pod\" not found"
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.33"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("NodeControllerDegraded: All master nodes are ready"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.33"}]

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created configmap/openshift-service-ca -n openshift-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "controlPlane": map[string]any{"replicas": float64(1)}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "extendedArguments": map[string]any{ +  "cluster-cidr": []any{string("10.128.0.0/16")}, +  "cluster-name": []any{string("sno-qm8m4")}, +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  "service-cluster-ip-range": []any{string("172.30.0.0/16")}, +  }, +  "featureGates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), +  string("DisableKubeletCloudCredentialProviders=true"), +  string("GCPLabelsTags=true"), string("HardwareSpeed=true"), +  string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), +  string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), +  string("MultiArchInstallAWS=true"), ..., +  }, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing
(x2)

openshift-controller-manager

kubelet

controller-manager-6c9b8f4d95-whcm2

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found
(x2)

openshift-controller-manager

kubelet

controller-manager-6c9b8f4d95-whcm2

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" in 3.815s (3.815s including waiting). Image size: 495888162 bytes.

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b" in 3.848s (3.848s including waiting). Image size: 494959854 bytes.

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-99888bb9b

SuccessfulCreate

Created pod: controller-manager-99888bb9b-v22d2

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-6c9b8f4d95 to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-99888bb9b to 1 from 0

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-controller-manager

default-scheduler

controller-manager-99888bb9b-v22d2

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

replicaset-controller

controller-manager-6c9b8f4d95

SuccessfulDelete

Deleted pod: controller-manager-6c9b8f4d95-whcm2

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-576b4d78bd to 1
(x3)

openshift-controller-manager

kubelet

controller-manager-6c9b8f4d95-whcm2

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x3)

openshift-controller-manager

kubelet

controller-manager-6c9b8f4d95-whcm2

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMissing

no observedConfig

openshift-service-ca

replicaset-controller

service-ca-576b4d78bd

SuccessfulCreate

Created pod: service-ca-576b4d78bd-nqcs2

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-service-ca

default-scheduler

service-ca-576b4d78bd-nqcs2

Scheduled

Successfully assigned openshift-service-ca/service-ca-576b4d78bd-nqcs2 to master-0

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.sno.openstack.lab"

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-9786ffb6f to 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-9786ffb6f

SuccessfulCreate

Created pod: route-controller-manager-9786ffb6f-5tj2q

openshift-route-controller-manager

default-scheduler

route-controller-manager-9786ffb6f-5tj2q

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q to master-0

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "" to "APIServicesAvailable: endpoints \"api\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: ",Available changed from Unknown to False ("")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "apiServerArguments": map[string]any{ +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  }, +  "projectConfig": map[string]any{"projectRequestMessage": string("")}, +  "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  }, +  "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}},   }

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.33"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-controller-manager

default-scheduler

controller-manager-5b75dfd574-s72zx

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Started

Started container copy-operator-controller-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Created

Created container: copy-operator-controller-manifests
(x5)

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found
(x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-6f47d587d6-ccrxg_c196f51d-41c5-49a3-a49f-790d492eb779 became leader
(x5)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-v5tpt

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{OperatorAvailable True 2026-02-24 02:04:04 +0000 UTC AsExpected } {OperatorProgressing False 2026-02-24 02:04:04 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-02-24 02:04:04 +0000 UTC AsExpected }]
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.33"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.33"} {"operator" "4.18.33"}]

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-d7sx4

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-99888bb9b

SuccessfulDelete

Deleted pod: controller-manager-99888bb9b-v22d2

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

NamespaceCreated

Created Namespace/openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-controller-manager

default-scheduler

controller-manager-99888bb9b-v22d2

FailedScheduling

skip schedule deleting pod: openshift-controller-manager/controller-manager-99888bb9b-v22d2

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-99888bb9b to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5b75dfd574 to 1 from 0

openshift-service-ca

multus

service-ca-576b4d78bd-nqcs2

AddedInterface

Add eth0 [10.128.0.33/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

RequiredInstallerResourcesMissing

configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12
(x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found
(x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver namespace

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.openshift-global-ca.configmap

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from to https://kubernetes.default.svc

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well")

openshift-controller-manager

replicaset-controller

controller-manager-5b75dfd574

SuccessfulCreate

Created pod: controller-manager-5b75dfd574-s72zx
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-576b4d78bd-nqcs2_640dd99c-7edb-4c1b-a836-3f648821850f became leader

openshift-controller-manager

default-scheduler

controller-manager-5b75dfd574-s72zx

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-5b75dfd574-s72zx to master-0

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from to https://api.sno.openstack.lab:6443

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-8l58x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" in 4.39s (4.39s including waiting). Image size: 463600445 bytes.

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-t5rgn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015" in 4.364s (4.364s including waiting). Image size: 443170136 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.33"

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

CSRCreated

A csr "system:openshift:openshift-authenticator-cknz9" is created for OpenShiftAuthenticatorCertRequester

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-t5rgn

Started

Started container graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-t5rgn

Created

Created container: graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-t5rgn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015" already present on machine

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-t5rgn

Started

Started container migrator

openshift-kube-storage-version-migrator

kubelet

migrator-5c85bff57-t5rgn

Created

Created container: migrator

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6847bb4785-8l58x

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6847bb4785-8l58x became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.33"} {"csi-snapshot-controller" "4.18.33"}]
(x3)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.33"
(x3)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.33"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-env-var-controller

etcd-operator

EnvVarControllerUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreateFailed

Failed to create Secret/: secrets "check-endpoints-client-cert-key" already exists

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

NoValidCertificateFound

No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator

authentication-operator

CSRApproval

The CSR "system:openshift:openshift-authenticator-cknz9" has been approved

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: secrets \"check-endpoints-client-cert-key\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-CheckEndpointsClient-certrotationcontroller

kube-apiserver-operator

RotationError

secrets "check-endpoints-client-cert-key" already exists
(x3)

openshift-controller-manager

kubelet

controller-manager-5b75dfd574-s72zx

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7" in 3.085s (3.085s including waiting). Image size: 511059399 bytes.

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-oauth-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-oauth-apiserver namespace

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: secrets \"check-endpoints-client-cert-key\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.33"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-5bd7768f54-7wc6k_e39b4e57-ac61-4cc0-a624-d5e803f0ba9d became leader
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-qm8m4")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}},    "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, +  "serviceServingCert": map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), +  },    "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},   }
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-6f47d587d6-ccrxg_6c71ca36-3e17-4896-9821-531c5af39913 became leader

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing
(x2)

openshift-network-operator

kubelet

iptables-alerter-rjbl5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9"

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-network-operator

kubelet

iptables-alerter-rjbl5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" in 433ms (433ms including waiting). Image size: 582052489 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-9786ffb6f-5tj2q

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing
(x6)

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-5g6nc

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-7f665d79f to 1

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing

openshift-apiserver

replicaset-controller

apiserver-7f665d79f

SuccessfulCreate

Created pod: apiserver-7f665d79f-x624m
(x6)

openshift-multus

kubelet

network-metrics-daemon-tntcf

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace
(x6)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-8cg5c

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
(x6)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing
(x6)

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-ffnq7

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "mco-proxy-tls" not found
(x6)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-apiserver because it was missing
(x6)

openshift-multus

kubelet

network-metrics-daemon-tntcf

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-apiserver

default-scheduler

apiserver-7f665d79f-x624m

Scheduled

Successfully assigned openshift-apiserver/apiserver-7f665d79f-x624m to master-0
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-fkzdb

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x6)

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing
(x6)

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x6)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-machine-api

multus

cluster-baremetal-operator-d6bb9bb76-k98fq

AddedInterface

Add eth0 [10.128.0.5/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-dns-operator

multus

dns-operator-8c7d49845-hxcn2

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config:
  map[string]any(
-   nil,
+   {
+     "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
+     "oauthConfig": map[string]any{
+       "assetPublicURL": string(""),
+       "loginURL": string("https://api.sno.openstack.lab:6443"),
+       "templates": map[string]any{
+         "error": string("/var/config/system/secrets/v4-0-"...),
+         "login": string("/var/config/system/secrets/v4-0-"...),
+         "providerSelection": string("/var/config/system/secrets/v4-0-"...),
+       },
+       "tokenConfig": map[string]any{
+         "accessTokenMaxAgeSeconds": float64(86400),
+         "authorizeTokenMaxAgeSeconds": float64(300),
+       },
+     },
+     "serverArguments": map[string]any{
+       "audit-log-format": []any{string("json")},
+       "audit-log-maxbackup": []any{string("10")},
+       "audit-log-maxsize": []any{string("100")},
+       "audit-log-path": []any{string("/var/log/oauth-server/audit.log")},
+       "audit-policy-file": []any{string("/var/run/configmaps/audit/audit."...)},
+     },
+     "servingInfo": map[string]any{
+       "cipherSuites": []any{
+         string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+         string("TLS_CHACHA20_POLY1305_SHA256"),
+         string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM"...), ...,
+       },
+       "minTLSVersion": string("VersionTLS12"),
+     },
+     "volumesToMount": map[string]any{"identityProviders": string("{}")},
+   },
  )

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-v5tpt

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6"

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3"

openshift-image-registry

multus

cluster-image-registry-operator-779979bdf7-d7sx4

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-d7sx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from 0 to 86400

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-bcf775fc9-8x6sd

AddedInterface

Add eth0 [10.128.0.12/23] from ovn-kubernetes

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2"

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3"

openshift-ingress-operator

multus

ingress-operator-6569778c84-6dlqb

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreateFailed

Failed to create ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role: client rate limiter Wait returned an error: context canceled

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-5bd7768f54-7wc6k_506667c4-c56d-4e53-8d80-0159b1e08bb3 became leader

openshift-network-operator

kubelet

iptables-alerter-rjbl5

Created

Created container: iptables-alerter

openshift-network-operator

kubelet

iptables-alerter-rjbl5

Started

Started container iptables-alerter

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing
(x4)

openshift-apiserver

kubelet

apiserver-7f665d79f-x624m

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found

openshift-apiserver

replicaset-controller

apiserver-7f665d79f

SuccessfulDelete

Deleted pod: apiserver-7f665d79f-x624m
(x45)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing
(x4)

openshift-apiserver

kubelet

apiserver-7f665d79f-x624m

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-apiserver

replicaset-controller

apiserver-79dc9447fd

SuccessfulCreate

Created pod: apiserver-79dc9447fd-x64vl

openshift-apiserver

default-scheduler

apiserver-79dc9447fd-x64vl

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-79dc9447fd to 1 from 0

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-7f665d79f to 0 from 1
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "admission": map[string]any{
+     "pluginConfig": map[string]any{
+       "PodSecurity": map[string]any{"configuration": map[string]any{...}},
+       "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}},
+       "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}},
+     },
+   },
+   "apiServerArguments": map[string]any{
+     "api-audiences": []any{string("https://kubernetes.default.svc")},
+     "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},
+     "feature-gates": []any{
+       string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+       string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+       string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+       string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+     },
+     "goaway-chance": []any{string("0")},
+     "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")},
+     "send-retry-after-while-not-ready-once": []any{string("true")},
+     "service-account-issuer": []any{string("https://kubernetes.default.svc")},
+     "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")},
+     "shutdown-delay-duration": []any{string("0s")},
+   },
+   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
+   "gracefulTerminationDuration": string("15"),
+   "servicesSubnet": string("172.30.0.0/16"),
+   "servingInfo": map[string]any{
+     "bindAddress": string("0.0.0.0:6443"),
+     "bindNetwork": string("tcp4"),
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+     "namedCertificates": []any{
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-resou"...),
+         "keyFile": string("/etc/kubernetes/static-pod-resou"...),
+       },
+     },
+   },
  }

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver

default-scheduler

apiserver-79dc9447fd-x64vl

Scheduled

Successfully assigned openshift-apiserver/apiserver-79dc9447fd-x64vl to master-0
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379,https://localhost:2379
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing
(x101)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing
(x3)

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-9786ffb6f-5tj2q

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" in 8.015s (8.015s including waiting). Image size: 470717179 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Created

Created container: baremetal-kube-rbac-proxy

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-v5tpt

Started

Started container cluster-version-operator

openshift-controller-manager

replicaset-controller

controller-manager-56767fb5d4

SuccessfulCreate

Created pod: controller-manager-56767fb5d4-2ghfz

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3" in 7.91s (7.91s including waiting). Image size: 468159025 bytes.

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

Created

Created container: dns-operator

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

Started

Started container dns-operator

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Started

Started container kube-rbac-proxy

openshift-route-controller-manager

default-scheduler

route-controller-manager-975858db4-g96fv

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Created

Created container: kube-rbac-proxy

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5b75dfd574 to 0 from 1

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-bcf775fc9-8x6sd_68fa5393-9f47-48db-a636-cbfa1da06bd5

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-bcf775fc9-8x6sd_68fa5393-9f47-48db-a636-cbfa1da06bd5 became leader

openshift-route-controller-manager

replicaset-controller

route-controller-manager-975858db4

SuccessfulCreate

Created pod: route-controller-manager-975858db4-g96fv

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-d7sx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721" in 8.033s (8.033s including waiting). Image size: 548646306 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Created

Created container: baremetal-kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Started

Started container baremetal-kube-rbac-proxy

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_43245d9d-8fab-48f3-84bd-d989b38aa119 became leader

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-v5tpt

Created

Created container: cluster-version-operator

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3"

openshift-apiserver

multus

apiserver-79dc9447fd-x64vl

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-v5tpt

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" in 8.448s (8.448s including waiting). Image size: 517888569 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-route-controller-manager

replicaset-controller

route-controller-manager-9786ffb6f

SuccessfulDelete

Deleted pod: route-controller-manager-9786ffb6f-5tj2q

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" in 8.015s (8.015s including waiting). Image size: 511125422 bytes.

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-779979bdf7-d7sx4_8776ad10-1325-45bc-beda-fc891d069b29 became leader

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Started

Started container baremetal-kube-rbac-proxy

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-56767fb5d4 to 1 from 0

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" in 8.081s (8.081s including waiting). Image size: 677827184 bytes.

openshift-controller-manager

replicaset-controller

controller-manager-5b75dfd574

SuccessfulDelete

Deleted pod: controller-manager-5b75dfd574-s72zx
(x6)

openshift-controller-manager

kubelet

controller-manager-5b75dfd574-s72zx

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-975858db4 to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-9786ffb6f to 0 from 1

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.37/23] from ovn-kubernetes

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" in 8.081s (8.081s including waiting). Image size: 677827184 bytes.

openshift-cluster-node-tuning-operator

kubelet

tuned-26b2v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine

openshift-route-controller-manager

default-scheduler

route-controller-manager-975858db4-g96fv

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-975858db4-g96fv to master-0

openshift-cluster-node-tuning-operator

kubelet

tuned-26b2v

Started

Started container tuned

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-ingress

default-scheduler

router-default-7b65dc9fcb-22sgl

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

kubelet

tuned-26b2v

Created

Created container: tuned

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-26b2v

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-cluster-node-tuning-operator

default-scheduler

tuned-26b2v

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-26b2v to master-0

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-7b65dc9fcb to 1

openshift-machine-api

cluster-baremetal-operator-d6bb9bb76-k98fq_327c49fc-c908-4c4d-af35-4402bc055f16

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-d6bb9bb76-k98fq_327c49fc-c908-4c4d-af35-4402bc055f16 became leader

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

openshift-dns

default-scheduler

node-resolver-4lwwp

Scheduled

Successfully assigned openshift-dns/node-resolver-4lwwp to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-dns

default-scheduler

dns-default-5rf6m

Scheduled

Successfully assigned openshift-dns/dns-default-5rf6m to master-0

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

Started

Started container kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-8c7d49845-hxcn2

Created

Created container: kube-rbac-proxy

openshift-ingress

replicaset-controller

router-default-7b65dc9fcb

SuccessfulCreate

Created pod: router-default-7b65dc9fcb-22sgl

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-4lwwp (x2)

openshift-controller-manager

default-scheduler

controller-manager-56767fb5d4-2ghfz

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-5rf6m

openshift-dns

kubelet

node-resolver-4lwwp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" already present on machine

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.40/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-dns

kubelet

node-resolver-4lwwp

Started

Started container dns-node-resolver

openshift-dns

kubelet

node-resolver-4lwwp

Created

Created container: dns-node-resolver

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-dns

kubelet

dns-default-5rf6m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd"

openshift-dns

multus

dns-default-5rf6m

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

default-scheduler

controller-manager-56767fb5d4-2ghfz

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-56767fb5d4-2ghfz to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-catalogd

replicaset-controller

catalogd-controller-manager-84b8d9d697

SuccessfulCreate

Created pod: catalogd-controller-manager-84b8d9d697-jhklz

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-catalogd

default-scheduler

catalogd-controller-manager-84b8d9d697-jhklz

Scheduled

Successfully assigned openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-84b8d9d697 to 1

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-9cc7d7bb to 1

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-9cc7d7bb

SuccessfulCreate

Created pod: operator-controller-controller-manager-9cc7d7bb-hvr8b

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing (x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml

openshift-operator-controller

default-scheduler

operator-controller-controller-manager-9cc7d7bb-hvr8b

Scheduled

Successfully assigned openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b to master-0

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-77597cc7cf to 1

openshift-oauth-apiserver

replicaset-controller

apiserver-77597cc7cf

SuccessfulCreate

Created pod: apiserver-77597cc7cf-8j2k2

openshift-oauth-apiserver

default-scheduler

apiserver-77597cc7cf-8j2k2

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2 to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3" in 8.109s (8.109s including waiting). Image size: 589275174 bytes.

openshift-kube-scheduler

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-dns

kubelet

dns-default-5rf6m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd" in 6.627s (6.627s including waiting). Image size: 484074784 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-catalogd

multus

catalogd-controller-manager-84b8d9d697-jhklz

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-5rf6m

Started

Started container kube-rbac-proxy

openshift-dns

kubelet

dns-default-5rf6m

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-5rf6m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-dns

kubelet

dns-default-5rf6m

Started

Started container dns

openshift-dns

kubelet

dns-default-5rf6m

Created

Created container: dns

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Started

Started container fix-audit-permissions

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-fkzdb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c"

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Started

Started container openshift-apiserver

openshift-monitoring

multus

cluster-monitoring-operator-6bb6d78bf-fkzdb

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-multus

multus

multus-admission-controller-5f98f4f8d5-dg77f

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-multus

multus

network-metrics-daemon-tntcf

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf"

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Created

Created container: fix-audit-permissions

openshift-monitoring

multus

cluster-monitoring-operator-6bb6d78bf-fkzdb

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-multus

multus

multus-admission-controller-5f98f4f8d5-dg77f

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656"

openshift-marketplace

multus

marketplace-operator-6f5488b997-4qf9p

AddedInterface

Add eth0 [10.128.0.7/23] from ovn-kubernetes

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-fkzdb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c"

openshift-multus

kubelet

network-metrics-daemon-tntcf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-operator-lifecycle-manager

multus

package-server-manager-5c75f78c8b-2hllb

AddedInterface

Add eth0 [10.128.0.25/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-5g6nc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7"

openshift-operator-lifecycle-manager

multus

olm-operator-5499d7f7bb-5g6nc

AddedInterface

Add eth0 [10.128.0.6/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-8cg5c

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7"

openshift-operator-lifecycle-manager

multus

catalog-operator-596f79dd6f-8cg5c

AddedInterface

Add eth0 [10.128.0.8/23] from ovn-kubernetes

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Started

Started container manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Created

Created container: manager

openshift-operator-controller

multus

operator-controller-controller-manager-9cc7d7bb-hvr8b

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-ffnq7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-ffnq7

Started

Started container machine-config-operator

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-ffnq7

Created

Created container: machine-config-operator

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-ffnq7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine

openshift-machine-config-operator

multus

machine-config-operator-7f8c75f984-ffnq7

AddedInterface

Add eth0 [10.128.0.28/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-catalogd

multus

catalogd-controller-manager-84b8d9d697-jhklz

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-multus

multus

network-metrics-daemon-tntcf

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-multus

kubelet

network-metrics-daemon-tntcf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de"

openshift-oauth-apiserver

kubelet

apiserver-77597cc7cf-8j2k2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1"

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3" already present on machine

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-oauth-apiserver

multus

apiserver-77597cc7cf-8j2k2

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Created

Created container: kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Started

Started container kube-rbac-proxy

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.33} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf}]

openshift-catalogd

catalogd-controller-manager-84b8d9d697-jhklz_9251a783-3ef2-4b70-98cb-6e342ce8223a

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-84b8d9d697-jhklz_9251a783-3ef2-4b70-98cb-6e342ce8223a became leader

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Created

Created container: manager

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Created

Created container: openshift-apiserver-check-endpoints

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-79dc9447fd-x64vl

Started

Started container openshift-apiserver-check-endpoints

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Started

Started container manager

openshift-operator-controller

operator-controller-controller-manager-9cc7d7bb-hvr8b_bd73e0e9-3650-46c7-9ace-b2258bc4bd25

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-9cc7d7bb-hvr8b_bd73e0e9-3650-46c7-9ace-b2258bc4bd25 became leader

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-ffnq7

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-operator-7f8c75f984-ffnq7

Started

Started container kube-rbac-proxy

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RenderConfigFailed

Unable to apply 4.18.33: configmap "machine-config-osimageurl" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: caused by changes in data.config.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed"

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" in 5.238s (5.238s including waiting). Image size: 458025547 bytes.

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" in 5.301s (5.301s including waiting). Image size: 456470711 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-fkzdb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c" in 5.232s (5.232s including waiting). Image size: 484349508 bytes.
(x68)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-multus

kubelet

network-metrics-daemon-tntcf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de" in 5.299s (5.299s including waiting). Image size: 448723134 bytes.

openshift-oauth-apiserver

kubelet

apiserver-77597cc7cf-8j2k2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1" in 5.693s (5.693s including waiting). Image size: 505244089 bytes.

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-kjcrm" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-75d56db95f-9gkp2

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

kubelet

network-metrics-daemon-tntcf

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-tntcf

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-tntcf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

network-metrics-daemon-tntcf

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-tntcf

Created

Created container: network-metrics-daemon

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.apps.openshift.io because it was missing

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-5cfd9759cf to 0 from 1

openshift-cluster-version

replicaset-controller

cluster-version-operator-5cfd9759cf

SuccessfulDelete

Deleted pod: cluster-version-operator-5cfd9759cf-v5tpt

openshift-cluster-version

kubelet

cluster-version-operator-5cfd9759cf-v5tpt

Killing

Stopping container cluster-version-operator

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.authorization.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.build.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.image.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:52681->172.30.0.10:53: read: connection refused" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:37085->172.30.0.10:53: read: connection refused" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:52681->172.30.0.10:53: read: connection refused"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:37085->172.30.0.10:53: read: connection refused" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:37085->172.30.0.10:53: read: connection refused",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:37085->172.30.0.10:53: read: connection refused"

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-mhkn7" is created for OpenShiftMonitoringClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-kjcrm" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.project.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x59)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-oauth-apiserver

kubelet

apiserver-77597cc7cf-8j2k2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-77597cc7cf-8j2k2

Started

Started container fix-audit-permissions

openshift-kube-scheduler

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-scheduler

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-2-master-0

Started

Started container installer

openshift-oauth-apiserver

kubelet

apiserver-77597cc7cf-8j2k2

Created

Created container: fix-audit-permissions

openshift-multus

kubelet

network-metrics-daemon-tntcf

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-tntcf

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-tntcf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

network-metrics-daemon-tntcf

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-tntcf

Created

Created container: network-metrics-daemon

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Created

Created container: multus-admission-controller

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-mhkn7" has been approved

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-fkzdb

Created

Created container: cluster-monitoring-operator

openshift-monitoring

kubelet

cluster-monitoring-operator-6bb6d78bf-fkzdb

Started

Started container cluster-monitoring-operator

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-kjcrm" has been approved

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

Started

Started container marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

Created

Created container: marketplace-operator

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-75d56db95f to 1

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-75d56db95f

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-75d56db95f-9gkp2

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-75d56db95f-9gkp2

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.route.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady"

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-57476485 to 1

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.33"}] to [{"operator" "4.18.33"} {"openshift-apiserver" "4.18.33"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.33"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.quota.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.security.openshift.io because it was missing

openshift-cluster-version

default-scheduler

cluster-version-operator-57476485-9cjj5

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-57476485-9cjj5 to master-0

openshift-cluster-version

replicaset-controller

cluster-version-operator-57476485

SuccessfulCreate

Created pod: cluster-version-operator-57476485-9cjj5

openshift-oauth-apiserver

kubelet

apiserver-77597cc7cf-8j2k2

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-77597cc7cf-8j2k2

Started

Started container oauth-apiserver

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.template.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-975858db4-g96fv

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-57df7db547 to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-56cd46585c to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-975858db4 to 0 from 1
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.oauth.openshift.io because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-56767fb5d4 to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-975858db4

SuccessfulDelete

Deleted pod: route-controller-manager-975858db4-g96fv

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.user.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/template.openshift.io/v1: 401"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-controller-manager

replicaset-controller

controller-manager-57df7db547

SuccessfulCreate

Created pod: controller-manager-57df7db547-2v9c5

openshift-controller-manager

replicaset-controller

controller-manager-56767fb5d4

SuccessfulDelete

Deleted pod: controller-manager-56767fb5d4-2ghfz
(x6)

openshift-controller-manager

kubelet

controller-manager-56767fb5d4-2ghfz

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-8cg5c

Created

Created container: catalog-operator

openshift-route-controller-manager

replicaset-controller

route-controller-manager-56cd46585c

SuccessfulCreate

Created pod: route-controller-manager-56cd46585c-nhkd9

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-5g6nc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" in 10.417s (10.417s including waiting). Image size: 862501144 bytes.

openshift-operator-lifecycle-manager

package-server-manager-5c75f78c8b-2hllb_8050b3bb-f8f6-44d9-afc8-ab9f6cebb44b

packageserver-controller-lock

LeaderElection

package-server-manager-5c75f78c8b-2hllb_8050b3bb-f8f6-44d9-afc8-ab9f6cebb44b became leader

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-5g6nc

Created

Created container: olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-5499d7f7bb-5g6nc

Started

Started container olm-operator

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-8cg5c

Started

Started container catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-596f79dd6f-8cg5c

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" in 10.396s (10.396s including waiting). Image size: 862501144 bytes.

openshift-kube-scheduler

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-scheduler

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-3-master-0

Started

Started container installer

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_c50690ca-6522-48a3-a9d3-7c498bbf22bf became leader

openshift-cluster-version

kubelet

cluster-version-operator-57476485-9cjj5

Started

Started container cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-57476485-9cjj5

Created

Created container: cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-57476485-9cjj5

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" already present on machine

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" in 10.173s (10.173s including waiting). Image size: 862501144 bytes.

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

Created

Created container: package-server-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c75f78c8b-2hllb

Started

Started container package-server-manager

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer
(x2)

openshift-controller-manager

default-scheduler

controller-manager-57df7db547-2v9c5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

requirements not yet checked

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-marketplace

default-scheduler

certified-operators-dwmm5

Scheduled

Successfully assigned openshift-marketplace/certified-operators-dwmm5 to master-0
(x2)

openshift-route-controller-manager

default-scheduler

route-controller-manager-56cd46585c-nhkd9

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-marketplace

default-scheduler

community-operators-rvp5j

Scheduled

Successfully assigned openshift-marketplace/community-operators-rvp5j to master-0

openshift-marketplace

kubelet

certified-operators-dwmm5

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-597975fc65

SuccessfulCreate

Created pod: packageserver-597975fc65-xcl6c

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

community-operators-rvp5j

Started

Started container extract-utilities

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-rvp5j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

default-scheduler

redhat-marketplace-hrmdr

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-hrmdr to master-0

openshift-marketplace

kubelet

certified-operators-dwmm5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-marketplace

kubelet

community-operators-rvp5j

Created

Created container: extract-utilities

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-operator-lifecycle-manager

default-scheduler

packageserver-597975fc65-xcl6c

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c to master-0

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-marketplace

kubelet

community-operators-rvp5j

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33"

openshift-marketplace

multus

certified-operators-dwmm5

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-dwmm5

Created

Created container: extract-utilities

openshift-marketplace

kubelet

certified-operators-dwmm5

Started

Started container extract-utilities

openshift-marketplace

multus

community-operators-rvp5j

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-597975fc65 to 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.33"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.33"}] to [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"}]

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64"

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-controller-manager

default-scheduler

controller-manager-57df7db547-2v9c5

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-57df7db547-2v9c5 to master-0

openshift-marketplace

multus

redhat-marketplace-hrmdr

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

multus

packageserver-597975fc65-xcl6c

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-route-controller-manager

default-scheduler

route-controller-manager-56cd46585c-nhkd9

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9 to master-0

openshift-operator-lifecycle-manager

kubelet

packageserver-597975fc65-xcl6c

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Started

Started container extract-utilities

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

kubelet

packageserver-597975fc65-xcl6c

Started

Started container packageserver

openshift-operator-lifecycle-manager

kubelet

packageserver-597975fc65-xcl6c

Created

Created container: packageserver

openshift-marketplace

multus

redhat-operators-g862w

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74"

openshift-route-controller-manager

multus

route-controller-manager-56cd46585c-nhkd9

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-56cd46585c-nhkd9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655"

openshift-marketplace

default-scheduler

redhat-operators-g862w

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-g862w to master-0

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

multus

controller-manager-57df7db547-2v9c5

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

redhat-operators-g862w

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-g862w

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

redhat-operators-g862w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

redhat-operators-g862w

Created

Created container: extract-utilities

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well"

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" in 6.894s (6.894s including waiting). Image size: 558105176 bytes.

openshift-kube-controller-manager

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-route-controller-manager

kubelet

route-controller-manager-56cd46585c-nhkd9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" in 6.845s (6.845s including waiting). Image size: 486990304 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-56cd46585c-nhkd9

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-56cd46585c-nhkd9

Started

Started container route-controller-manager
(x23)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Created

Created container: controller-manager

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Started

Started container controller-manager

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-57df7db547-2v9c5 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-686847ff5f to 1

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-686847ff5f

SuccessfulCreate

Created pod: control-plane-machine-set-operator-686847ff5f-ckntz

openshift-machine-api

default-scheduler

control-plane-machine-set-operator-686847ff5f-ckntz

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz to master-0

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-56cd46585c-nhkd9_fd42e190-3b01-4e91-98f7-b570a7bec0d4 became leader

openshift-etcd

kubelet

etcd-master-0-master-0

Killing

Stopping container etcdctl

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac"

openshift-machine-api

multus

control-plane-machine-set-operator-686847ff5f-ckntz

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-g862w

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-g862w

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-rvp5j

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-dwmm5

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-dwmm5

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-dwmm5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

community-operators-rvp5j

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-g862w

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

community-operators-rvp5j

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

certified-operators-dwmm5

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 424ms (424ms including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

certified-operators-dwmm5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 432ms (432ms including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

certified-operators-dwmm5

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-g862w

Started

Started container registry-server

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-marketplace

kubelet

redhat-operators-g862w

Created

Created container: registry-server

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-marketplace

kubelet

community-operators-rvp5j

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-g862w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 517ms (517ms including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

community-operators-rvp5j

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 473ms (473ms including waiting). Image size: 918153745 bytes.

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-marketplace

kubelet

community-operators-rvp5j

Created

Created container: registry-server

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-marketplace

kubelet

redhat-operators-g862w

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted
(x3)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

Unhealthy

Liveness probe failed: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused
(x3)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

ProbeError

Liveness probe error: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused body:
(x5)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

Unhealthy

Readiness probe failed: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused
(x5)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

ProbeError

Readiness probe error: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused body:
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Unhealthy

Liveness probe failed: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

ProbeError

Liveness probe error: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused body:
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

ProbeError

Liveness probe error: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused body:
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Unhealthy

Liveness probe failed: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused
(x6)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Unhealthy

Readiness probe failed: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused
(x6)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Unhealthy

Readiness probe failed: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused
(x6)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Unhealthy

Readiness probe failed: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused
(x7)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

ProbeError

Readiness probe error: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused body:
(x7)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

ProbeError

Readiness probe error: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused body:
(x7)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

ProbeError

Readiness probe error: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused body:
(x3)

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-jb9vb

ProbeError

Liveness probe error: Get "https://10.128.0.14:8443/healthz": dial tcp 10.128.0.14:8443: connect: connection refused body:
(x3)

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-jb9vb

Unhealthy

Liveness probe failed: Get "https://10.128.0.14:8443/healthz": dial tcp 10.128.0.14:8443: connect: connection refused
(x3)

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Unhealthy

Liveness probe failed: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused
(x3)

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

ProbeError

Liveness probe error: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused body:

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz
(x7)

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

ProbeError

Readiness probe error: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused body:
(x7)

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Unhealthy

Readiness probe failed: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused
(x2)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-5d87bf58c-2492q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-jb9vb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-7bcfbc574b-tl97n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-77cd4d9559-8tttg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine
(x2)

openshift-network-operator

kubelet

network-operator-7d7db75979-drrqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-xdws2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" already present on machine

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-c7fgn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" already present on machine

openshift-service-ca-operator

kubelet

service-ca-operator-c48c8bf7c-6fqkr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" already present on machine
(x2)

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-jb9vb

Started

Started container etcd-operator
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-c48c8bf7c-6fqkr

Created

Created container: service-ca-operator
(x3)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-77cd4d9559-8tttg

Started

Started container kube-scheduler-operator-container
(x3)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-77cd4d9559-8tttg

Created

Created container: kube-scheduler-operator-container
(x2)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-5d87bf58c-2492q

Created

Created container: kube-apiserver-operator
(x2)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-5d87bf58c-2492q

Started

Started container kube-apiserver-operator
(x3)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-7bcfbc574b-tl97n

Started

Started container kube-controller-manager-operator
(x3)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-7bcfbc574b-tl97n

Created

Created container: kube-controller-manager-operator
(x2)

openshift-network-operator

kubelet

network-operator-7d7db75979-drrqm

Started

Started container network-operator
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-c48c8bf7c-6fqkr

Started

Started container service-ca-operator
(x3)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-xdws2

Created

Created container: kube-storage-version-migrator-operator
(x2)

openshift-network-operator

kubelet

network-operator-7d7db75979-drrqm

Created

Created container: network-operator
(x2)

openshift-etcd-operator

kubelet

etcd-operator-545bf96f4d-jb9vb

Created

Created container: etcd-operator
(x3)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-fc889cfd5-xdws2

Started

Started container kube-storage-version-migrator-operator
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-c7fgn

Started

Started container openshift-controller-manager-operator
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-584cc7bcb5-c7fgn

Created

Created container: openshift-controller-manager-operator

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-57df7db547-2v9c5 became leader

openshift-machine-api

control-plane-machine-set-operator-686847ff5f-ckntz_c2eb175a-d9a1-4a74-ab83-1f41bf91cd70

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-686847ff5f-ckntz_c2eb175a-d9a1-4a74-ab83-1f41bf91cd70 became leader

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-5d8dfcdc87-bb22k became leader

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-d7sx4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721" already present on machine
(x2)

openshift-service-ca

kubelet

service-ca-576b4d78bd-nqcs2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" already present on machine

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

BackOff

Back-off restarting failed container cluster-olm-operator in pod cluster-olm-operator-5bd7768f54-7wc6k_openshift-cluster-olm-operator(303d5058-84df-40d1-a941-896b093ae470)
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6fb4df594f-c95qc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" already present on machine

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine
(x2)

openshift-service-ca

kubelet

service-ca-576b4d78bd-nqcs2

Started

Started container service-ca-controller

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: cluster-policy-controller
(x2)

openshift-service-ca

kubelet

service-ca-576b4d78bd-nqcs2

Created

Created container: service-ca-controller
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

Created

Created container: cluster-node-tuning-operator

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": context deadline exceeded

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container cluster-policy-controller
(x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-d7sx4

Created

Created container: cluster-image-registry-operator
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6fb4df594f-c95qc

Started

Started container csi-snapshot-controller-operator
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6fb4df594f-c95qc

Created

Created container: csi-snapshot-controller-operator
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

Started

Started container cluster-node-tuning-operator
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bcf775fc9-8x6sd

Created

Created container: cluster-node-tuning-operator
(x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-779979bdf7-d7sx4

Started

Started container cluster-image-registry-operator
(x3)

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

BackOff

Back-off restarting failed container openshift-config-operator in pod openshift-config-operator-6f47d587d6-ccrxg_openshift-config-operator(c92835f0-7f32-4584-8304-843d7979392a)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_737bae04-b6ce-45d7-983a-7b919b204569 became leader
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7" already present on machine
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Started

Started container cluster-olm-operator
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-5bd7768f54-7wc6k

Created

Created container: cluster-olm-operator
(x4)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

Created

Created container: authentication-operator
(x3)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" already present on machine

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

Killing

Container authentication-operator failed liveness probe, will be restarted
(x6)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

Unhealthy

Liveness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused
(x6)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

ProbeError

Liveness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body:
(x4)

openshift-authentication-operator

kubelet

authentication-operator-5bd7c86784-46vmq

Started

Started container authentication-operator
(x2)

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" already present on machine
(x3)

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Created

Created container: openshift-config-operator
(x3)

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Started

Started container openshift-config-operator

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_b318888e-bd7c-4f28-b369-d7c09cfa4a9d became leader

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

ProbeError

Liveness probe error: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused body:

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Unhealthy

Liveness probe failed: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

ProbeError

Readiness probe error: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused body:

openshift-config-operator

kubelet

openshift-config-operator-6f47d587d6-ccrxg

Unhealthy

Readiness probe failed: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-5bd7768f54-7wc6k_6c8fbbdd-a330-47db-b45e-813cff14d19b became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-bcf775fc9-8x6sd_04423564-d012-45a4-84ef-fa9851d3920f

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-bcf775fc9-8x6sd_04423564-d012-45a4-84ef-fa9851d3920f became leader

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-779979bdf7-d7sx4_32d2ff87-fe5b-415f-9946-f64ec841045f became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-6fb4df594f-c95qc_6a9817f7-0f7e-4bc1-ab6a-6daee0e29254 became leader

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-6f47d587d6-ccrxg_3f46327c-d05e-4186-9463-32aa70584c8d became leader

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-576b4d78bd-nqcs2_d5f1af4c-b4aa-455a-8ce3-7dab851b82c7 became leader

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_81195f95-6dee-4590-b64f-da92867f2c51 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-6968c58f46

SuccessfulCreate

Created pod: cloud-credential-operator-6968c58f46-fcr59

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-6968c58f46 to 1 (x3)

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

FailedMount

MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found

openshift-cloud-credential-operator

multus

cloud-credential-operator-6968c58f46-fcr59

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed"

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

Started

Started container kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

Created

Created container: kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-hrmdr

Killing

Stopping container registry-server

openshift-marketplace

kubelet

redhat-operators-g862w

Killing

Stopping container registry-server

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Started

Started container extract-utilities

openshift-marketplace

multus

redhat-operators-4znnj

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-4znnj

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-4znnj

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-4znnj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

multus

redhat-marketplace-qqt7p

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"

openshift-marketplace

kubelet

certified-operators-dwmm5

Killing

Stopping container registry-server

openshift-marketplace

kubelet

redhat-operators-4znnj

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 719ms (719ms including waiting). Image size: 1202767548 bytes.

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-marketplace

kubelet

community-operators-rvp5j

Killing

Stopping container registry-server

openshift-marketplace

kubelet

redhat-operators-4znnj

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 748ms (748ms including waiting). Image size: 1703852494 bytes.

openshift-marketplace

kubelet

redhat-operators-4znnj

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-4znnj

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-kkwwl

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-marketplace

kubelet

community-operators-kkwwl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

community-operators-kkwwl

Started

Started container extract-utilities

openshift-marketplace

multus

certified-operators-brpmb

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-brpmb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

certified-operators-brpmb

Created

Created container: extract-utilities

openshift-marketplace

multus

community-operators-kkwwl

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-brpmb

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-4znnj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-798b897698 to 1

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well"

openshift-cluster-machine-approver

replicaset-controller

machine-approver-798b897698

SuccessfulCreate

Created pod: machine-approver-798b897698-rqrlc

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Created

Created container: registry-server

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

Started

Started container cloud-credential-operator

openshift-marketplace

kubelet

certified-operators-brpmb

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-4znnj

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-4znnj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 3.253s (3.253s including waiting). Image size: 918153745 bytes.

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed" in 7.654s (7.654s including waiting). Image size: 880247193 bytes.

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

Created

Created container: cloud-credential-operator

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 3.197s (3.197s including waiting). Image size: 918153745 bytes.

openshift-marketplace

kubelet

redhat-marketplace-qqt7p

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-4znnj

Started

Started container registry-server

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-65c5c48b9b to 1

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

community-operators-kkwwl

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-65c5c48b9b

SuccessfulCreate

Created pod: cluster-samples-operator-65c5c48b9b-bkc9s

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa"

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-86b8dc6d6 to 1

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-86b8dc6d6

SuccessfulCreate

Created pod: cluster-autoscaler-operator-86b8dc6d6-mtrdk

openshift-marketplace

kubelet

certified-operators-brpmb

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 650ms (650ms including waiting). Image size: 1238591178 bytes.

openshift-marketplace

kubelet

certified-operators-brpmb

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-brpmb

Started

Started container extract-content

openshift-cluster-samples-operator

multus

cluster-samples-operator-65c5c48b9b-bkc9s

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-kkwwl

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-kkwwl

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-kkwwl

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 715ms (715ms including waiting). Image size: 1210563790 bytes.

openshift-marketplace

kubelet

certified-operators-brpmb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-59b498fcfb to 1

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa" in 2.037s (2.037s including waiting). Image size: 467133839 bytes.

openshift-insights

replicaset-controller

insights-operator-59b498fcfb

SuccessfulCreate

Created pod: insights-operator-59b498fcfb-dbkwd

openshift-marketplace

kubelet

community-operators-kkwwl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7"

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-f94476f49 to 1

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-f94476f49

SuccessfulCreate

Created pod: cluster-storage-operator-f94476f49-c5wlk

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6"

openshift-machine-api

multus

cluster-autoscaler-operator-86b8dc6d6-mtrdk

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-brpmb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 374ms (374ms including waiting). Image size: 918153745 bytes.

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6"

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-c5wlk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75"

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c"

openshift-insights

multus

insights-operator-59b498fcfb-dbkwd

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-marketplace

kubelet

community-operators-kkwwl

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-kkwwl

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-kkwwl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 389ms (389ms including waiting). Image size: 918153745 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-cluster-machine-approver

master-0_8ed46aa3-b0da-42be-84d6-6fda52a24b08

cluster-machine-approver-leader

LeaderElection

master-0_8ed46aa3-b0da-42be-84d6-6fda52a24b08 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Started

Started container machine-approver-controller

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-marketplace

kubelet

certified-operators-brpmb

Started

Started container registry-server

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Created

Created container: machine-approver-controller

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-marketplace

kubelet

certified-operators-brpmb

Created

Created container: registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing

openshift-cluster-storage-operator

multus

cluster-storage-operator-f94476f49-c5wlk

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-cbd75ff8d to 1

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-cbd75ff8d

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

openshift-machine-api

replicaset-controller

machine-api-operator-5c7cf458b4

SuccessfulCreate

Created pod: machine-api-operator-5c7cf458b4-dsjgm

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-5c7cf458b4 to 1

openshift-marketplace

kubelet

redhat-operators-4znnj

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6" in 3.396s (3.396s including waiting). Image size: 456273550 bytes.

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-hfpql

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e"

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6" in 4.914s (4.914s including waiting). Image size: 455311777 bytes.

openshift-machine-api

multus

machine-api-operator-5c7cf458b4-dsjgm

AddedInterface

Add eth0 [10.128.0.66/23] from ovn-kubernetes

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-c5wlk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75" in 4.619s (4.619s including waiting). Image size: 513473308 bytes.

openshift-machine-config-operator

kubelet

machine-config-daemon-hfpql

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-hfpql

Created

Created container: machine-config-daemon

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

Created

Created container: kube-rbac-proxy

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c" in 4.736s (4.736s including waiting). Image size: 504558291 bytes.

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-c5wlk

Created

Created container: cluster-storage-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

Started

Started container cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

Created

Created container: cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

Started

Started container cluster-samples-operator

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-c5wlk

Started

Started container cluster-storage-operator

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-f94476f49-c5wlk_842b1853-3c64-4f4f-acfa-4a73b2b0efb7 became leader

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api

cluster-autoscaler-operator-86b8dc6d6-mtrdk_ce9bbad5-3d0c-452a-9f50-5efcf2544f9e

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-86b8dc6d6-mtrdk_ce9bbad5-3d0c-452a-9f50-5efcf2544f9e became leader

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

Started

Started container cluster-autoscaler-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

Created

Created container: cluster-autoscaler-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

Created

Created container: cluster-samples-operator

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34"

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled gate lists as in the FeatureGatesInitialized event above)

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-daemon-hfpql

Started

Started container machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-hfpql

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-hfpql

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-daemon-hfpql

Started

Started container kube-rbac-proxy

openshift-insights

openshift-insights-operator

insights-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled gate lists as in the FeatureGatesInitialized event above)

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorVersionChanged

clusteroperator/storage version "operator" changed from "" to "4.18.33"

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well")

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing

openshift-machine-config-operator

deployment-controller

machine-config-controller

ScalingReplicaSet

Scaled up replica set machine-config-controller-54cb48566c to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing

openshift-machine-config-operator

replicaset-controller

machine-config-controller-54cb48566c

SuccessfulCreate

Created pod: machine-config-controller-54cb48566c-xzpl4

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" in 7.853s (7.853s including waiting). Image size: 557320737 bytes.

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c" already present on machine
(x2)

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

Started

Started container insights-operator
(x2)

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

Created

Created container: insights-operator

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

master-0_9ed6d9b2-f433-40f6-8fe5-afcb27f63175

cluster-cloud-config-sync-leader

LeaderElection

master-0_9ed6d9b2-f433-40f6-8fe5-afcb27f63175 became leader

openshift-cloud-controller-manager-operator

master-0_ae0cb5dd-bcec-459d-baa3-1ab9bccd384f

cluster-cloud-controller-manager-leader

LeaderElection

master-0_ae0cb5dd-bcec-459d-baa3-1ab9bccd384f became leader

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-xzpl4

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Started

Started container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (same Enabled/Disabled gate lists as in the FeatureGatesInitialized event above)

openshift-machine-config-operator

multus

machine-config-controller-54cb48566c-xzpl4

AddedInterface

Add eth0 [10.128.0.67/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-xzpl4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-xzpl4

Created

Created container: machine-config-controller

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-xzpl4

Started

Started container kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Started

Started container kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-xzpl4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-xzpl4

Started

Started container machine-config-controller

openshift-monitoring

multus

prometheus-operator-admission-webhook-75d56db95f-9gkp2

AddedInterface

Add eth0 [10.128.0.70/23] from ovn-kubernetes

openshift-ingress

kubelet

router-default-7b65dc9fcb-22sgl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143"

openshift-network-diagnostics

multus

network-check-source-58fb6744f5-l4wh6

AddedInterface

Add eth0 [10.128.0.69/23] from ovn-kubernetes

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6847bb4785-8l58x

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6847bb4785-8l58x became leader

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531640-kptmw

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531640-kptmw

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531640-kptmw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-operator-lifecycle-manager

multus

collect-profiles-29531640-kptmw

AddedInterface

Add eth0 [10.128.0.68/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Killing

Stopping container machine-approver-controller

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-798b897698 to 0 from 1

openshift-cluster-machine-approver

kubelet

machine-approver-798b897698-rqrlc

Killing

Stopping container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-cluster-machine-approver

replicaset-controller

machine-approver-798b897698

SuccessfulDelete

Deleted pod: machine-approver-798b897698-rqrlc

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-drf28

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-9gkp2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350"

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/reason=missing MachineConfig rendered-master-493a2e7ba4d367d6ccee941846bd8ced machineconfig.machineconfiguration.openshift.io "rendered-master-493a2e7ba4d367d6ccee941846bd8ced" not found

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-493a2e7ba4d367d6ccee941846bd8ced successfully generated (release version: 4.18.33, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-493a2e7ba4d367d6ccee941846bd8ced

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-7dd9c7d7b9 to 1

openshift-cluster-machine-approver

replicaset-controller

machine-approver-7dd9c7d7b9

SuccessfulCreate

Created pod: machine-approver-7dd9c7d7b9-sjqsx

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-63df4eee2110de1e37432310f6e83f1d successfully generated (release version: 4.18.33, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-network-diagnostics

kubelet

network-check-source-58fb6744f5-l4wh6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Degraded

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-493a2e7ba4d367d6ccee941846bd8ced

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-9gkp2

Started

Started container prometheus-operator-admission-webhook

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

Created

Created container: machine-api-operator

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

Started

Started container machine-api-operator

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-9gkp2

Created

Created container: prometheus-operator-admission-webhook

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29531640, condition: Complete

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531640

Completed

Job completed

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34" in 12.782s (12.782s including waiting). Image size: 862091954 bytes.

openshift-machine-config-operator

kubelet

machine-config-server-drf28

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine

openshift-machine-config-operator

kubelet

machine-config-server-drf28

Created

Created container: machine-config-server

openshift-machine-config-operator

kubelet

machine-config-server-drf28

Started

Started container machine-config-server

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-9gkp2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350" in 1.705s (1.705s including waiting). Image size: 444471741 bytes.

openshift-network-diagnostics

kubelet

network-check-source-58fb6744f5-l4wh6

Started

Started container check-endpoints

openshift-network-diagnostics

kubelet

network-check-source-58fb6744f5-l4wh6

Created

Created container: check-endpoints

default

machineapioperator

machine-api

Status upgrade

Progressing towards operator: 4.18.33

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

Started

Started container kube-rbac-proxy

openshift-ingress

kubelet

router-default-7b65dc9fcb-22sgl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143" in 5.107s (5.108s including waiting). Image size: 487054953 bytes.

openshift-ingress

kubelet

router-default-7b65dc9fcb-22sgl

Created

Created container: router

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-ingress

kubelet

router-default-7b65dc9fcb-22sgl

Started

Started container router

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-754bc4d665 to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-monitoring

replicaset-controller

prometheus-operator-754bc4d665

SuccessfulCreate

Created pod: prometheus-operator-754bc4d665-66lml

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-cluster-machine-approver

master-0_16879520-47d0-4475-ae17-bfd0ea98490a

cluster-machine-approver-leader

LeaderElection

master-0_16879520-47d0-4475-ae17-bfd0ea98490a became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RequiredPoolsFailed

Unable to apply 4.18.33: error during syncRequiredMachineConfigPools: context deadline exceeded

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:107d0b66a0b081fa2f9ab28965bb268093061321d71c56fba884e29613866285"

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled down replica set cluster-cloud-controller-manager-operator-cbd75ff8d to 0 from 1

openshift-monitoring

multus

prometheus-operator-754bc4d665-66lml

AddedInterface

Add eth0 [10.128.0.71/23] from ovn-kubernetes

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-cbd75ff8d

SuccessfulDelete

Deleted pod: cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Killing

Stopping container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Killing

Stopping container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw

Killing

Stopping container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-67dd8d7969

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-67dd8d7969 to 1

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Started

Started container kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

Created

Created container: prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:107d0b66a0b081fa2f9ab28965bb268093061321d71c56fba884e29613866285" in 3.84s (3.84s including waiting). Image size: 461468192 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-59584d565f to 1

openshift-monitoring

replicaset-controller

kube-state-metrics-59584d565f

SuccessfulCreate

Created pod: kube-state-metrics-59584d565f-f6f26

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-2qn8m

openshift-monitoring

replicaset-controller

openshift-state-metrics-6dbff8cb4c

SuccessfulCreate

Created pod: openshift-state-metrics-6dbff8cb4c-swtr6

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-6dbff8cb4c to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreateFailed

Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Started | Started container kube-rbac-proxy-main
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Created | Created container: kube-rbac-proxy-main
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | multus | openshift-state-metrics-6dbff8cb4c-swtr6 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes
openshift-monitoring | kubelet | node-exporter-2qn8m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438"
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing
openshift-monitoring | kubelet | node-exporter-2qn8m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438"
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing (x10)

openshift-ingress | kubelet | router-default-7b65dc9fcb-22sgl | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Started | Started container kube-rbac-proxy-main
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Created | Created container: kube-rbac-proxy-main
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | multus | openshift-state-metrics-6dbff8cb4c-swtr6 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d"
openshift-monitoring | multus | kube-state-metrics-59584d565f-f6f26 | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d"
openshift-monitoring | multus | kube-state-metrics-59584d565f-f6f26 | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing
openshift-monitoring | kubelet | node-exporter-2qn8m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" in 1.153s (1.153s including waiting). Image size: 417586222 bytes.
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Started | Started container kube-rbac-proxy-self
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708"
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Started | Started container kube-rbac-proxy-self
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | kubelet | node-exporter-2qn8m | Started | Started container init-textfile
openshift-monitoring | kubelet | node-exporter-2qn8m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" in 1.153s (1.153s including waiting). Image size: 417586222 bytes.
openshift-monitoring | kubelet | node-exporter-2qn8m | Created | Created container: init-textfile
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708"
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | node-exporter-2qn8m | Started | Started container init-textfile
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | node-exporter-2qn8m | Created | Created container: init-textfile
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d" in 1.897s (1.897s including waiting). Image size: 440450463 bytes.

openshift-monitoring | kubelet | node-exporter-2qn8m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" already present on machine
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Started | Started container kube-state-metrics
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Created | Created container: kube-state-metrics
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Created | Created container: kube-state-metrics
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d" in 1.897s (1.897s including waiting). Image size: 440450463 bytes.
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Started | Started container openshift-state-metrics
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Created | Created container: openshift-state-metrics
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708" in 1.578s (1.578s including waiting). Image size: 431873347 bytes.
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Started | Started container kube-state-metrics
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Started | Started container openshift-state-metrics
openshift-monitoring | kubelet | node-exporter-2qn8m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | kubelet | node-exporter-2qn8m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" already present on machine
openshift-monitoring | kubelet | node-exporter-2qn8m | Started | Started container node-exporter
openshift-monitoring | kubelet | node-exporter-2qn8m | Created | Created container: node-exporter
openshift-monitoring | kubelet | node-exporter-2qn8m | Created | Created container: node-exporter
openshift-monitoring | kubelet | node-exporter-2qn8m | Started | Started container node-exporter
openshift-monitoring | kubelet | node-exporter-2qn8m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708" in 1.578s (1.578s including waiting). Image size: 431873347 bytes.
openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-swtr6 | Created | Created container: openshift-state-metrics
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Created | Created container: kube-rbac-proxy-main
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Created | Created container: kube-rbac-proxy-main
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | kubelet | node-exporter-2qn8m | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | node-exporter-2qn8m | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-2qn8m | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Started | Started container kube-rbac-proxy-self
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Started | Started container kube-rbac-proxy-main
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing

openshift-monitoring | kubelet | node-exporter-2qn8m | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Started | Started container kube-rbac-proxy-main
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | kubelet | kube-state-metrics-59584d565f-f6f26 | Started | Started container kube-rbac-proxy-self
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-7b9cc5984b to 1
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-n76llk2nkkst -n openshift-monitoring because it was missing
openshift-monitoring | replicaset-controller | metrics-server-7b9cc5984b | SuccessfulCreate | Created pod: metrics-server-7b9cc5984b-smpdl
openshift-monitoring | multus | metrics-server-7b9cc5984b-smpdl | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-n76llk2nkkst -n openshift-monitoring because it was missing
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-7b9cc5984b to 1
openshift-monitoring | replicaset-controller | metrics-server-7b9cc5984b | SuccessfulCreate | Created pod: metrics-server-7b9cc5984b-smpdl
openshift-monitoring | multus | metrics-server-7b9cc5984b-smpdl | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb"
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb"
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" in 1.759s (1.759s including waiting). Image size: 471325816 bytes.
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" in 1.759s (1.759s including waiting). Image size: 471325816 bytes.
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Started | Started container metrics-server
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Started | Started container metrics-server
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Created | Created container: metrics-server
openshift-monitoring | kubelet | metrics-server-7b9cc5984b-smpdl | Created | Created container: metrics-server
openshift-network-node-identity | master-0_a22bb25e-608b-4274-9e33-a638b37102a0 | ovnkube-identity | LeaderElection | master-0_a22bb25e-608b-4274-9e33-a638b37102a0 became leader
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing (x2)
openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done
openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-493a2e7ba4d367d6ccee941846bd8ced and node has been uncordoned
openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-493a2e7ba4d367d6ccee941846bd8ced to Done
openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-493a2e7ba4d367d6ccee941846bd8ced
openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason=
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.33} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf}]
openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhklz_6f4d5d9b-4689-4039-905b-270455ffa907 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhklz_6f4d5d9b-4689-4039-905b-270455ffa907 became leader
openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhklz_6f4d5d9b-4689-4039-905b-270455ffa907 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhklz_6f4d5d9b-4689-4039-905b-270455ffa907 became leader
openshift-operator-controller | operator-controller-controller-manager-9cc7d7bb-hvr8b_9593eb58-e1a6-437d-884c-8abfe04f8f9b | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-9cc7d7bb-hvr8b_9593eb58-e1a6-437d-884c-8abfe04f8f9b became leader
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | kubelet | machine-config-daemon-hfpql | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused
openshift-machine-config-operator | kubelet | machine-config-daemon-hfpql | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:
openshift-machine-api | cluster-baremetal-operator-d6bb9bb76-k98fq_c03565c0-22a5-4c9f-ad5d-b15a70d7752f | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-d6bb9bb76-k98fq_c03565c0-22a5-4c9f-ad5d-b15a70d7752f became leader
openshift-machine-api | cluster-baremetal-operator-d6bb9bb76-k98fq_c03565c0-22a5-4c9f-ad5d-b15a70d7752f | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-d6bb9bb76-k98fq_c03565c0-22a5-4c9f-ad5d-b15a70d7752f became leader
openshift-cloud-controller-manager-operator | master-0_4931bee4-c965-4dbd-8cda-0121543764c5 | cluster-cloud-config-sync-leader | LeaderElection | master-0_4931bee4-c965-4dbd-8cda-0121543764c5 became leader
openshift-cloud-controller-manager-operator | master-0_e6449d2c-59c8-4df3-8f4a-6cb9531b02fb | cluster-cloud-controller-manager-leader | LeaderElection | master-0_e6449d2c-59c8-4df3-8f4a-6cb9531b02fb became leader
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace
openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-jjpsc
openshift-ingress-canary | kubelet | ingress-canary-jjpsc | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found
openshift-ingress-canary | multus | ingress-canary-jjpsc | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes
openshift-ingress-canary | kubelet | ingress-canary-jjpsc | Started | Started container serve-healthcheck-canary
openshift-ingress-canary | kubelet | ingress-canary-jjpsc | Created | Created container: serve-healthcheck-canary
openshift-ingress-canary | kubelet | ingress-canary-jjpsc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" already present on machine
openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced")
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-545bf96f4d-jb9vb_535ac6dd-846a-4358-a30d-cdeaed08a8ea became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing
(x4)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Started

Started container ingress-operator

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing
(x4)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Created

Created container: ingress-operator

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-5bd7c86784-46vmq_6cf3e529-1381-4a7e-b4ec-b8bc97d27874 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory")

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_b002e0bf-48b9-4b04-8c60-ae69b0976539 became leader

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-75qmm

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-multus

kubelet

cni-sysctl-allowlist-ds-75qmm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine

openshift-multus

kubelet

cni-sysctl-allowlist-ds-75qmm

Started

Started container kube-multus-additional-cni-plugins

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-multus

kubelet

cni-sysctl-allowlist-ds-75qmm

Created

Created container: kube-multus-additional-cni-plugins

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-multus

kubelet

cni-sysctl-allowlist-ds-75qmm

Killing

Stopping container kube-multus-additional-cni-plugins

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-etcd because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-etcd

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.76/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-2-master-0

Started

Started container installer

openshift-etcd

kubelet

installer-2-master-0

Created

Created container: installer

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-multus

replicaset-controller

multus-admission-controller-5f54bf67d4

SuccessfulCreate

Created pod: multus-admission-controller-5f54bf67d4-ctssl

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-5f54bf67d4 to 1

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" already present on machine

openshift-multus

multus

multus-admission-controller-5f54bf67d4-ctssl

AddedInterface

Add eth0 [10.128.0.77/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

Started

Started container kube-rbac-proxy

openshift-multus

replicaset-controller

multus-admission-controller-5f98f4f8d5

SuccessfulDelete

Deleted pod: multus-admission-controller-5f98f4f8d5-dg77f

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Killing

Stopping container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-5f98f4f8d5-dg77f

Killing

Stopping container kube-rbac-proxy

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-5f98f4f8d5 to 0 from 1

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-5d87bf58c-2492q_6a014a5f-bda3-4bdd-b8b1-9d401cd115f2 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "optional secret/webhook-authenticator has been created"

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29531655

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531655

SuccessfulCreate

Created pod: collect-profiles-29531655-kw6fn

openshift-operator-lifecycle-manager

multus

collect-profiles-29531655-kw6fn

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},    "apiServerArguments": map[string]any{    "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "authentication-token-webhook-config-file": []any{ +  string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), +  }, +  "authentication-token-webhook-version": []any{string("v1")},    "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},    "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},    ... // 6 identical entries    },    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},    "gracefulTerminationDuration": string("15"),    ... // 2 identical entries   }

openshift-controller-manager

replicaset-controller

controller-manager-56b6d9c5b7

SuccessfulCreate

Created pod: controller-manager-56b6d9c5b7-lxwt6

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531655-kw6fn

Started

Started container collect-profiles

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.33"

openshift-route-controller-manager

replicaset-controller

route-controller-manager-56cd46585c

SuccessfulDelete

Deleted pod: route-controller-manager-56cd46585c-nhkd9

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-676fddcd58 to 1 from 0

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.33"}]

openshift-route-controller-manager

kubelet

route-controller-manager-56cd46585c-nhkd9

Killing

Stopping container route-controller-manager

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-56b6d9c5b7 to 1 from 0

openshift-controller-manager

kubelet

controller-manager-57df7db547-2v9c5

Killing

Stopping container controller-manager

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531655-kw6fn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531655-kw6fn

Created

Created container: collect-profiles

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager

replicaset-controller

controller-manager-57df7db547

SuccessfulDelete

Deleted pod: controller-manager-57df7db547-2v9c5

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-57df7db547 to 0 from 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-56cd46585c to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-584cc7bcb5-c7fgn_37390b9a-6a0b-43b9-ae73-e35660a7bee8 became leader

openshift-route-controller-manager

replicaset-controller

route-controller-manager-676fddcd58

SuccessfulCreate

Created pod: route-controller-manager-676fddcd58-49xzd

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: 
([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0 I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0224 02:04:44.754192 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0224 02:04:44.754202 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0 F0224 02:05:28.765452 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no pods available on any node.")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

multus

route-controller-manager-676fddcd58-49xzd

AddedInterface

Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-7bcfbc574b-tl97n_390078d8-4ef0-4eb6-8deb-d6af026a387d became leader

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

Started

Started container route-controller-manager

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-56b6d9c5b7-lxwt6 became leader

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

ProbeError

Readiness probe error: Get "https://10.128.0.80:8443/healthz": dial tcp 10.128.0.80:8443: connect: connection refused body:

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29531655, condition: Complete

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

Unhealthy

Readiness probe failed: Get "https://10.128.0.80:8443/healthz": dial tcp 10.128.0.80:8443: connect: connection refused

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531655

Completed

Job completed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

Created

Created container: route-controller-manager

openshift-controller-manager

multus

controller-manager-56b6d9c5b7-lxwt6

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-676fddcd58-49xzd_96047748-8676-4c2a-9be5-851c30e36185 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing
(x3)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-75qmm

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing
(x6)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

BackOff

Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

BackOff

Back-off restarting failed container approver in pod network-node-identity-p5b6q_openshift-network-node-identity(adc1097b-c1ab-4f09-965d-1c819671475b)
(x4)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine
(x4)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x4)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup
(x2)

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
(x2)

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Started

Started container approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-p5b6q

Created

Created container: approver

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine
(x6)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

BackOff

Back-off restarting failed container ingress-operator in pod ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Started

Started container config-sync-controllers
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Created

Created container: config-sync-controllers
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Started

Started container cluster-cloud-controller-manager
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

Created

Created container: cluster-cloud-controller-manager
(x2)

openshift-marketplace

kubelet

marketplace-operator-6f5488b997-4qf9p

BackOff

Back-off restarting failed container marketplace-operator in pod marketplace-operator-6f5488b997-4qf9p_openshift-marketplace(91d16f7b-390a-4d9d-99d6-cc8e210801d1)
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

BackOff

Back-off restarting failed container manager in pod operator-controller-controller-manager-9cc7d7bb-hvr8b_openshift-operator-controller(4a2d8ef6-14ac-490d-a931-7082344d3f46)
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

BackOff

Back-off restarting failed container manager in pod catalogd-controller-manager-84b8d9d697-jhklz_openshift-catalogd(4f5b3b93-a59d-495c-a311-8913fa6000fc)
(x4)

openshift-ingress-operator

kubelet

ingress-operator-6569778c84-6dlqb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" already present on machine
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-84b8d9d697-jhklz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" already present on machine
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-9cc7d7bb-hvr8b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

Started

Started container machine-approver-controller
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

Created

Created container: machine-approver-controller
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa" already present on machine
(x4)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-8l58x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" already present on machine
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-8l58x

Started

Started container snapshot-controller
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-8l58x

Created

Created container: snapshot-controller
(x2)

openshift-controller-manager

kubelet

controller-manager-56b6d9c5b7-lxwt6

Started

Started container controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-56b6d9c5b7-lxwt6

Created

Created container: controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-56b6d9c5b7-lxwt6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

Started

Started container control-plane-machine-set-operator

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

BackOff

Back-off restarting failed container ovnkube-cluster-manager in pod ovnkube-control-plane-5d8dfcdc87-bb22k_openshift-ovn-kubernetes(523033b8-4101-4a55-8320-55bef04ddaaf)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" already present on machine
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

Created

Created container: control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" already present on machine
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

Started

Started container control-plane-machine-set-operator
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

Created

Created container: control-plane-machine-set-operator
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Created

Created container: ovnkube-cluster-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5d8dfcdc87-bb22k

Started

Started container ovnkube-cluster-manager
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Started

Started container cluster-baremetal-operator

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev
(x12)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6847bb4785-8l58x

BackOff

Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-d6bb9bb76-k98fq_openshift-machine-api(7b4e3ba0-5194-4e20-8f12-dea4b67504fe)
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-d6bb9bb76-k98fq_openshift-machine-api(7b4e3ba0-5194-4e20-8f12-dea4b67504fe)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 
2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

installer errors: installer: icy-controller-config", (string) (len=29) "controller-manager-kubeconfig", (string) (len=38) "kube-controller-cert-syncer-kubeconfig", (string) (len=17) "serviceaccount-ca", (string) (len=10) "service-ca", (string) (len=15) "recycler-config" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "cloud-config" }, CertSecretNames: ([]string) (len=2 cap=2) { (string) (len=39) "kube-controller-manager-client-cert-key", (string) (len=10) "csr-signer" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0 I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0224 02:15:10.645377 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0224 02:15:10.645396 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0 F0224 02:15:54.653869 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" 
(string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time 
allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get 
clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-scripts)\nEtcdEndpointsDegraded: applying configmap update failed :the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-endpoints)\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-scripts)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: applying configmap update failed :the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-endpoints)\nEtcdMembersDegraded: No unhealthy members found"
(x4)

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) 
(len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting 
installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) 
\"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io 
system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) 
\"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: 
PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: 
ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for 
installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

installer-1-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" already present on machine
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" already present on machine

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Started

Started container installer

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io 
system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps 
etcd-serving-ca)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)",Progressing changed from False to True ("APIServerDeploymentProgressing: 
deployment/openshift-oauth-apiserver: could not be retrieved"),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"
(x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Created

Created container: cluster-baremetal-operator
(x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-d6bb9bb76-k98fq

Created

Created container: cluster-baremetal-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) 
\"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: 
(string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io 
system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces 
openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config-2 -n openshift-kube-apiserver: caused by changes in data.config.yaml

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-serving-ca)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces 
openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of 
oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from 
https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

Namespace: openshift-kube-apiserver-operator
Component: kube-apiserver-operator-status-controller-statussyncer_kube-apiserver
RelatedObject: kube-apiserver-operator
Reason: OperatorStatusChanged
Message:

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) 
\"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io 
system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) 
\"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

Namespace: openshift-authentication-operator
Component: oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller
RelatedObject: authentication-operator
Reason: DeploymentUpdated
Message: Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed

Namespace: openshift-config-operator
Component: config-operator-status-controller-statussyncer_config-operator
RelatedObject: openshift-config-operator
Reason: OperatorStatusChanged
Message: Status for clusteroperator/config-operator changed: Degraded message changed from "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

Namespace: openshift-authentication-operator
Component: oauth-apiserver-status-controller-statussyncer_authentication
RelatedObject: authentication-operator
Reason: OperatorStatusChanged
Message:

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-serving-ca)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces 
openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get 
clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-serving-ca)\"\nAPIServerWorkloadDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)"

Namespace: openshift-controller-manager-operator
Component: openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager
RelatedObject: openshift-controller-manager-operator
Reason: OperatorStatusChanged
Message:

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "WorkloadDegraded: \"openshift-controller-manager\" \"config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nWorkloadDegraded: ",Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.",Available changed from False to True ("All is well")

Namespace: openshift-controller-manager-operator
Component: openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager
RelatedObject: openshift-controller-manager-operator
Reason: OperatorStatusChanged
Message:

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "WorkloadDegraded: \"openshift-controller-manager\" \"config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nWorkloadDegraded: " to "All is well",Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7." to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 8."

Namespace: openshift-kube-apiserver-operator
Component: kube-apiserver-operator-revisioncontroller
RelatedObject: kube-apiserver-operator
Reason: SecretCreated
Message: Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

Namespace: openshift-kube-controller-manager-operator
Component: kube-controller-manager-operator-installer-controller
RelatedObject: kube-controller-manager-operator
Reason: PodCreated
Message: Created Pod/installer-2-retry-1-master-0 -n openshift-kube-controller-manager because it was missing

Namespace: openshift-kube-controller-manager
Component: multus
RelatedObject: installer-2-retry-1-master-0
Reason: AddedInterface
Message: Add eth0 [10.128.0.83/23] from ovn-kubernetes

Namespace: openshift-kube-controller-manager
Component: kubelet
RelatedObject: installer-2-retry-1-master-0
Reason: Pulled
Message: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

Namespace: openshift-kube-apiserver-operator
Component: kube-apiserver-operator-revisioncontroller
RelatedObject: kube-apiserver-operator
Reason: SecretCreated
Message: Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing
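Event dumps like the records above are easier to triage once aggregated by Reason. A minimal sketch, assuming the events have already been parsed into dicts whose keys mirror the table's Namespace and Reason columns (the sample values are taken from events in this section; the field names themselves are an assumption of this sketch, not part of the dump):

```python
from collections import Counter

# Hypothetical parsed records; the "namespace"/"reason" keys mirror the
# dump's Namespace and Reason columns, values copied from this section.
events = [
    {"namespace": "openshift-kube-apiserver-operator", "reason": "SecretCreated"},
    {"namespace": "openshift-kube-controller-manager-operator", "reason": "PodCreated"},
    {"namespace": "openshift-kube-apiserver-operator", "reason": "SecretCreated"},
]

# Tally how often each (namespace, reason) pair occurs.
counts = Counter((e["namespace"], e["reason"]) for e in events)

# Print pairs, most frequent first.
for (namespace, reason), n in counts.most_common():
    print(f"{n:>3}  {namespace}  {reason}")
```

A tally like this surfaces noisy repeat events (the `(x3)`/`(x4)` counts below are the console's own version of the same idea) without having to read each multi-kilobyte message.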

Namespace: openshift-kube-controller-manager
Component: kubelet
RelatedObject: installer-2-retry-1-master-0
Reason: Started
Message: Started container installer

Namespace: openshift-kube-controller-manager
Component: kubelet
RelatedObject: installer-2-retry-1-master-0
Reason: Created
Message: Created container: installer

Namespace: openshift-kube-apiserver-operator
Component: kube-apiserver-operator-status-controller-statussyncer_kube-apiserver
RelatedObject: kube-apiserver-operator
Reason: OperatorStatusChanged
Message:

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: 
PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node 
master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)"
(x3)

Namespace: openshift-apiserver-operator
Component: kubelet
RelatedObject: openshift-apiserver-operator-8586dccc9b-sl5hz
Reason: Pulled
Message: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" already present on machine
(x4)

Namespace: openshift-apiserver-operator
Component: kubelet
RelatedObject: openshift-apiserver-operator-8586dccc9b-sl5hz
Reason: Started
Message: Started container openshift-apiserver-operator
(x4)

Namespace: openshift-apiserver-operator
Component: kubelet
RelatedObject: openshift-apiserver-operator-8586dccc9b-sl5hz
Reason: Created
Message: Created container: openshift-apiserver-operator

Namespace: openshift-kube-apiserver-operator
Component: kube-apiserver-operator-revisioncontroller
RelatedObject: kube-apiserver-operator
Reason: RevisionTriggered
Message: new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed"

Namespace: openshift-kube-apiserver-operator
Component: kube-apiserver-operator-status-controller-statussyncer_kube-apiserver
RelatedObject: kube-apiserver-operator
Reason: OperatorStatusChanged
Message:

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: 
PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" to "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return 
a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: 
PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:44.733686 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754129 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:44.754192 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.754202 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:44.763645 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:05:14.764084 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:05:28.765452 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)"

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigDaemonFailed

Failed to resync 4.18.33 because: failed to apply machine config daemon manifests: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io mcd-prometheus-k8s)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-apiserver

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:42.535411 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552775 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552834 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.552844 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.559182 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0224 02:04:52.563313 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0224 02:05:16.564097 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)\nNodeInstallerDegraded: W0224 02:05:36.563033 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:05:56.562039 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:06:10.565409 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0224 02:06:10.565469 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: "
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

installer errors: installer: ernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0224 02:04:42.535411 1 cmd.go:413] Getting controller reference for node master-0 I0224 02:04:42.552775 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0224 02:04:42.552834 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0224 02:04:42.552844 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0224 02:04:42.559182 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0224 02:04:52.563313 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0224 02:05:16.564097 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0224 02:05:36.563033 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0224 02:05:56.562039 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0224 02:06:10.565409 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0224 02:06:10.565469 1 cmd.go:109] timed out waiting for the 
condition

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"
(x5)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallCheckFailed

install timeout
(x6)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NeedsReinstall

apiServices not installed
(x6)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.33"}]
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.33"
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14"

openshift-kube-controller-manager

static-pod-installer

installer-2-retry-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 2

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, 
but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the 
request (get authentications.config.openshift.io cluster)"
(x6)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy
(x4)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:42.535411 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552775 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552834 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.552844 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.559182 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0224 02:04:52.563313 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0224 02:05:16.564097 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:05:36.563033 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:05:56.562039 1 cmd.go:470] Error getting installer 
pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:06:10.565409 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0224 02:06:10.565469 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:42.535411 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552775 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552834 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.552844 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.559182 1 
cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0224 02:04:52.563313 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0224 02:05:16.564097 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:05:36.563033 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:05:56.562039 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:06:10.565409 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0224 02:06:10.565469 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: "

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-node-reader)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator)\nKubeAPIServerStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_91a60537-5475-4891-95a8-687ca236c973 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-retry-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-node-reader)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler

multus

installer-3-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.85/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:leader-election-lock-cluster-policy-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6847bb4785-8l58x

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6847bb4785-8l58x became leader

openshift-kube-scheduler

kubelet

installer-3-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:leader-election-lock-cluster-policy-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:leader-election-lock-cluster-policy-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-scheduler

kubelet

installer-3-retry-1-master-0

Created

Created container: installer

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: 
\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get 
roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: 
\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-kube-scheduler

kubelet

installer-3-retry-1-master-0

Started

Started container installer

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io 
operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: \nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: \nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get 
customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: \nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-network-node-identity

master-0_dc9b0ef0-6104-4a57-987d-f0be7f8b99b9

ovnkube-identity

LeaderElection

master-0_dc9b0ef0-6104-4a57-987d-f0be7f8b99b9 became leader

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io catalogd-leader-election-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: 
\"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io operator-controller-manager-role)\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) 
\"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:leader-election-lock-cluster-policy-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:controller:namespace-security-allocation-controller)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints kubernetes)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:15:10.631140 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645273 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:15:10.645377 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.645396 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:15:10.649637 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0224 02:15:40.649713 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0224 02:15:54.653869 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 2 because static pod is ready

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_9d8f663d-b745-4a6c-aa88-4770d655e5ec became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing
(x455)

openshift-ingress

kubelet

router-default-7b65dc9fcb-22sgl

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0224 02:04:42.535411 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552775 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0224 02:04:42.552834 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.552844 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0224 02:04:42.559182 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0224 02:04:52.563313 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0224 02:05:16.564097 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:05:36.563033 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:05:56.562039 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0224 02:06:10.565409 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0224 02:06:10.565469 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-kube-scheduler

kubelet

installer-3-retry-1-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.86/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing
(x18)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

Failed to create installer pod for revision 3 count 1 on node "master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0": dial tcp 172.30.0.1:443: connect: connection refused

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigDaemonFailed

Failed to resync 4.18.33 because: failed to apply machine config daemon manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-machine-config-operator/rolebindings/machine-config-daemon": dial tcp 172.30.0.1:443: connect: connection refused
(x9)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.33 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused

default

kubelet

master-0

Starting

Starting kubelet.

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-597975fc65-xcl6c

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-597975fc65-xcl6c

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-version

kubelet

cluster-version-operator-57476485-9cjj5

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-version

kubelet

cluster-version-operator-57476485-9cjj5

FailedMount

MountVolume.SetUp failed for volume "service-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-2qn8m

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f94476f49-c5wlk

FailedMount

MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-ingress-canary

kubelet

ingress-canary-jjpsc

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-9gkp2

FailedMount

MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-7dd9c7d7b9-sjqsx

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-swtr6

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-2qn8m

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-drf28

FailedMount

MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-drf28

FailedMount

MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

FailedMount

MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-6968c58f46-fcr59

FailedMount

MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-56b6d9c5b7-lxwt6

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-56b6d9c5b7-lxwt6

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-56b6d9c5b7-lxwt6

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-2qn8m

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-65c5c48b9b-bkc9s

FailedMount

MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition
(x4)

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory
(x4)

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-swtr6

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"kube-apiserver" "1.31.14"} {"operator" "4.18.33"}]

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-swtr6

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
(x4)

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-controller-manager

kubelet

controller-manager-56b6d9c5b7-lxwt6

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-2qn8m

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-hfpql

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-54cb48566c-xzpl4

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-5c7cf458b4-dsjgm

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

control-plane-machine-set-operator-686847ff5f-ckntz

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59b498fcfb-dbkwd

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-754bc4d665-66lml

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-59584d565f-f6f26

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-75d56db95f-9gkp2

FailedMount

MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-swtr6

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-swtr6

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-86b8dc6d6-mtrdk

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-6dbff8cb4c-swtr6

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-2qn8m

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-67dd8d7969-8znkt

FailedMount

MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_a7d703cc-d069-4ed3-a31d-17060ed4d4db became leader
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-676fddcd58-49xzd

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-5f54bf67d4-ctssl

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-2qn8m

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition
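Every FailedMount above carries one of two near-identical causes, `failed to sync secret cache` or `failed to sync configmap cache`, all emitted in the window just after `Starting kubelet.` while the kubelet's informer caches were still warming up. Tallying the export by reason makes that transient burst obvious. A small sketch, assuming records have already been parsed into dicts with `reason` and `message` keys; the three inline records are abbreviated from events shown above.

```python
from collections import Counter

# A few records in the shape of this export; a real run would load
# the full parsed event list instead of this hand-picked sample.
events = [
    {"namespace": "openshift-monitoring", "reason": "FailedMount",
     "message": 'MountVolume.SetUp failed for volume "metrics-client-ca" : '
                "failed to sync configmap cache: timed out waiting for the condition"},
    {"namespace": "openshift-machine-api", "reason": "FailedMount",
     "message": 'MountVolume.SetUp failed for volume "images" : '
                "failed to sync configmap cache: timed out waiting for the condition"},
    {"namespace": "default", "reason": "Starting", "message": "Starting kubelet."},
]

# Tally reasons, then split the FailedMount noise by which cache timed out.
by_reason = Counter(ev["reason"] for ev in events)
cache_kind = Counter(
    "configmap" if "configmap cache" in ev["message"] else "secret"
    for ev in events
    if ev["reason"] == "FailedMount"
)
print(by_reason.most_common(), dict(cache_kind))
```

A burst of FailedMount events that stops on its own once the caches sync, as here, is expected noise during a node restart; the same tally persisting across later events would instead point at a real mount problem.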

openshift-machine-api

control-plane-machine-set-operator-686847ff5f-ckntz_00cabf34-3879-4848-a16c-5445a58dafd8

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-686847ff5f-ckntz_00cabf34-3879-4848-a16c-5445a58dafd8 became leader

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-operator namespace

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-user-settings namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub
(x7)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors
(x22)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14"
(x22)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.33"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection 
refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreateFailed

Failed to create Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator: secrets "next-service-account-private-key" already exists

openshift-console-operator

deployment-controller

console-operator

ScalingReplicaSet

Scaled up replica set console-operator-5df5ffc47c to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-console-operator

replicaset-controller

console-operator-5df5ffc47c

SuccessfulCreate

Created pod: console-operator-5df5ffc47c-gmjbd

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection 
refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: ",Progressing changed from False to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-console-operator

multus

console-operator-5df5ffc47c-gmjbd

AddedInterface

Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-console-operator

kubelet

console-operator-5df5ffc47c-gmjbd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") (x12)

openshift-ingress

kubelet

router-default-7b65dc9fcb-22sgl

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it changed

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection 
refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: 
connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "
(x13)

openshift-ingress

kubelet

router-default-7b65dc9fcb-22sgl

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists" to "NodeControllerDegraded: All master nodes are ready"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-003.pub

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-console-operator

kubelet

console-operator-5df5ffc47c-gmjbd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b" in 2.729s (2.729s including waiting). Image size: 512134379 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: caused by changes in data.service-account-003.pub

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"
(x3)

openshift-console-operator

kubelet

console-operator-5df5ffc47c-gmjbd

BackOff

Back-off restarting failed container console-operator in pod console-operator-5df5ffc47c-gmjbd_openshift-console-operator(8ea06201-f138-475b-86de-769d333048cb)

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-cloud-controller-manager-operator

master-0_b9345449-88d2-4f91-90e1-1b0a75edf056

cluster-cloud-config-sync-leader

LeaderElection

master-0_b9345449-88d2-4f91-90e1-1b0a75edf056 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},    "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, +  "authConfig": map[string]any{ +  "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), +  },    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},    "gracefulTerminationDuration": string("15"),    ... // 2 identical entries   }

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication

replicaset-controller

oauth-openshift-95876988f

SuccessfulCreate

Created pod: oauth-openshift-95876988f-c58ls

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-95876988f to 1

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-operator-controller

operator-controller-controller-manager-9cc7d7bb-hvr8b_70336347-58fd-4581-90fa-7fac39b62098

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-9cc7d7bb-hvr8b_70336347-58fd-4581-90fa-7fac39b62098 became leader
(x3)

openshift-authentication

kubelet

oauth-openshift-95876988f-c58ls

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-authentication

multus

oauth-openshift-95876988f-c58ls

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-authentication

kubelet

oauth-openshift-95876988f-c58ls

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-catalogd

catalogd-controller-manager-84b8d9d697-jhklz_1d352497-972c-4da1-98fe-2fe00a72ef34

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-84b8d9d697-jhklz_1d352497-972c-4da1-98fe-2fe00a72ef34 became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-catalogd

catalogd-controller-manager-84b8d9d697-jhklz_1d352497-972c-4da1-98fe-2fe00a72ef34

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-84b8d9d697-jhklz_1d352497-972c-4da1-98fe-2fe00a72ef34 became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"12827\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 2, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00358f7d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-kube-controller-manager

static-pod-installer

openshift-kube-controller-manager

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-authentication

kubelet

oauth-openshift-95876988f-c58ls

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" in 2.944s (2.944s including waiting). Image size: 481353554 bytes.

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3")

openshift-cloud-controller-manager-operator

master-0_1f83251b-f20b-4bdf-8ee1-ff2408b929ff

cluster-cloud-controller-manager-leader

LeaderElection

master-0_1f83251b-f20b-4bdf-8ee1-ff2408b929ff became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager
(x7)

openshift-kube-apiserver

kubelet

installer-2-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered

openshift-authentication

kubelet

oauth-openshift-95876988f-c58ls

Started

Started container oauth-openshift

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-authentication

kubelet

oauth-openshift-95876988f-c58ls

Created

Created container: oauth-openshift

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"
(x2)

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/oauth-openshift -n openshift-config-managed: configmaps "oauth-openshift" already exists

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x3)

openshift-console-operator

kubelet

console-operator-5df5ffc47c-gmjbd

Started

Started container console-operator
(x2)

openshift-console-operator

kubelet

console-operator-5df5ffc47c-gmjbd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b" already present on machine
(x3)

openshift-console-operator

kubelet

console-operator-5df5ffc47c-gmjbd

Created

Created container: console-operator

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-5df5ffc47c-gmjbd_877818b9-f5eb-4387-b73c-ae92f762468a became leader
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.33"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-apiserver

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

installer-3-master-0

Created

Created container: installer

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.33"}]

Namespace | Component | RelatedObject | Reason | Message
openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing
openshift-kube-apiserver | kubelet | installer-3-master-0 | Started | Started container installer
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found"
openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing
openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing
openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing
openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\n- \t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n \t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n \t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n \t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n \t},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-kube-scheduler | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4
openshift-kube-apiserver | kubelet | installer-3-master-0 | Killing | Stopping container installer
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"operator" "4.18.33"} {"kube-scheduler" "1.31.14"}]
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4"
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port
kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler (x2)
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14" (x2)
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.33"
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler
openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_dd63f964-8f3e-45a5-9b01-654e3799efa6 became leader

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine
openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_bf3e5170-68d8-4ccf-9ff7-6eb540446f6d became leader
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "All is well"
openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer
openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_4516e37a-cc82-4d74-9eb5-3ca2dd4dccb7 became leader

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller
openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"
openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_707f4e8d-77fc-4ca2-b779-aa7bf4865ae6 became leader
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes
openshift-kube-apiserver | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]
openshift-kube-apiserver | kubelet | installer-4-master-0 | Started | Started container installer
openshift-kube-apiserver | kubelet | installer-4-master-0 | Created | Created container: installer
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing (x3)

openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing
openshift-cluster-machine-approver | master-0_c62a9368-54c8-44fa-882a-529ac8c3d161 | cluster-machine-approver-leader | LeaderElection | master-0_c62a9368-54c8-44fa-882a-529ac8c3d161 became leader
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 0 replicas available"
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{    "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7e373bb5"...)}},    "controllers": []any{    ... // 8 identical elements    string("openshift.io/deploymentconfig"),    string("openshift.io/image-import"),    strings.Join({ +  "-",    "openshift.io/image-puller-rolebindings",    }, ""),    string("openshift.io/image-signature-import"),    string("openshift.io/image-trigger"),    ... // 2 identical elements    string("openshift.io/origin-namespace"),    string("openshift.io/serviceaccount"),    strings.Join({ +  "-",    "openshift.io/serviceaccount-pull-secrets",    }, ""),    string("openshift.io/templateinstance"),    string("openshift.io/templateinstancefinalizer"),    string("openshift.io/unidling"),    },    "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f7696d1b6"...)}},    "featureGates": []any{string("BuildCSIVolumes=true")},    "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   }
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created" (x3)
openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"
openshift-kube-apiserver | kubelet | installer-4-master-0 | Killing | Stopping container installer
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 5"
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine
openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes
openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer
openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer (x3)
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml (x3)
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused (x4)

openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.")
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml (x2)
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4")
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 4 because static pod is ready
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5." (x2)
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine (x2)
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager (x2)
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused body:
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true (x2)
openshift-console | controllermanager | downloads | NoPods | No matching pods found
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_7ea7ad45-9607-4716-a47c-7b81198c4c93 became leader
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller (x2)
openshift-console | controllermanager | console | NoPods | No matching pods found
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-576f8c76bf to 1
openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-5d9ddb8754 to 1
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-c67bf58c9 to 1 from 0
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-56b6d9c5b7 to 0 from 1
openshift-authentication | replicaset-controller | oauth-openshift-7f7cbb95f8 | SuccessfulCreate | Created pod: oauth-openshift-7f7cbb95f8-pfw2n
openshift-console | replicaset-controller | console-576f8c76bf | SuccessfulCreate | Created pod: console-576f8c76bf-2xx46
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-7f7cbb95f8 to 1 from 0
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6cf66f6dd4 to 1 from 0
openshift-controller-manager | replicaset-controller | controller-manager-c67bf58c9 | SuccessfulCreate | Created pod: controller-manager-c67bf58c9-mn7dg
openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_93ec0eef-c94a-4c1e-a88c-2f4115995109 became leader
openshift-controller-manager | replicaset-controller | controller-manager-56b6d9c5b7 | SuccessfulDelete | Deleted pod: controller-manager-56b6d9c5b7-lxwt6
openshift-monitoring | replicaset-controller | monitoring-plugin-5d9ddb8754 | SuccessfulCreate | Created pod: monitoring-plugin-5d9ddb8754-xtrdd
openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-5d9ddb8754 to 1
openshift-route-controller-manager | replicaset-controller | route-controller-manager-6cf66f6dd4 | SuccessfulCreate | Created pod: route-controller-manager-6cf66f6dd4-lbnq4
openshift-monitoring | replicaset-controller | monitoring-plugin-5d9ddb8754 | SuccessfulCreate | Created pod: monitoring-plugin-5d9ddb8754-xtrdd
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-676fddcd58 to 0 from 1
openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-79f587d78f to 1
openshift-network-console | replicaset-controller | networking-console-plugin-79f587d78f | SuccessfulCreate | Created pod: networking-console-plugin-79f587d78f-6bkc6
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-95876988f to 0 from 1
openshift-route-controller-manager | replicaset-controller | route-controller-manager-676fddcd58 | SuccessfulDelete | Deleted pod: route-controller-manager-676fddcd58-49xzd
openshift-authentication | replicaset-controller | oauth-openshift-95876988f | SuccessfulDelete | Deleted pod: oauth-openshift-95876988f-c58ls
openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-955b69498 to 1
openshift-console | replicaset-controller | downloads-955b69498 | SuccessfulCreate | Created pod: downloads-955b69498-x847l (x4)
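When reviewing a long run of events like these, a per-Reason tally is often more useful than reading row by row. A minimal sketch, assuming the records have already been split into (namespace, reason) pairs; `tally_reasons` and `sample` are illustrative names, not part of any real API:

```python
from collections import Counter

def tally_reasons(events):
    """Count how many events carry each Reason.

    `events` is an iterable of (namespace, reason) pairs standing in
    for rows of an event listing like the one above.
    """
    return Counter(reason for _, reason in events)

# A few rows from the listing above, reduced to (namespace, reason):
sample = [
    ("openshift-console", "ScalingReplicaSet"),
    ("openshift-monitoring", "ScalingReplicaSet"),
    ("openshift-controller-manager", "ScalingReplicaSet"),
    ("openshift-authentication", "SuccessfulCreate"),
]
print(tally_reasons(sample).most_common(1))  # → [('ScalingReplicaSet', 3)]
```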
openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused
openshift-route-controller-manager | kubelet | route-controller-manager-676fddcd58-49xzd | Killing | Stopping container route-controller-manager
openshift-authentication | kubelet | oauth-openshift-95876988f-c58ls | Killing | Stopping container oauth-openshift (x5)
openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused
openshift-controller-manager | kubelet | controller-manager-56b6d9c5b7-lxwt6 | Killing | Stopping container controller-manager
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor
openshift-controller-manager | kubelet | controller-manager-c67bf58c9-mn7dg | Created | Created container: controller-manager
openshift-console | kubelet | downloads-955b69498-x847l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085"
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b"
openshift-console | multus | downloads-955b69498-x847l | AddedInterface | Add eth0 [10.128.0.94/23] from ovn-kubernetes
openshift-network-console | kubelet | networking-console-plugin-79f587d78f-6bkc6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbffd1dbbfea8326edd5142aaed93290359c152c805239f2ffc77a21b6648490"
openshift-monitoring | multus | monitoring-plugin-5d9ddb8754-xtrdd | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes
openshift-controller-manager | kubelet | controller-manager-c67bf58c9-mn7dg | Started | Started container controller-manager
openshift-controller-manager | kubelet | controller-manager-c67bf58c9-mn7dg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b"
openshift-monitoring | multus | monitoring-plugin-5d9ddb8754-xtrdd | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes
openshift-network-console | multus | networking-console-plugin-79f587d78f-6bkc6 | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes
openshift-controller-manager | multus | controller-manager-c67bf58c9-mn7dg | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes
openshift-console | kubelet | console-576f8c76bf-2xx46 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657"
openshift-route-controller-manager | multus | route-controller-manager-6cf66f6dd4-lbnq4 | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes
openshift-route-controller-manager | kubelet | route-controller-manager-6cf66f6dd4-lbnq4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine
openshift-route-controller-manager | kubelet | route-controller-manager-6cf66f6dd4-lbnq4 | Created | Created container: route-controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-6cf66f6dd4-lbnq4 | Started | Started container route-controller-manager
openshift-console | multus | console-576f8c76bf-2xx46 | AddedInterface | Add eth0 [10.128.0.93/23] from ovn-kubernetes
openshift-network-console | kubelet | networking-console-plugin-79f587d78f-6bkc6 | Created | Created container: networking-console-plugin
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b" in 2.067s (2.067s including waiting). Image size: 447705420 bytes.
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b" in 2.067s (2.067s including waiting). Image size: 447705420 bytes.
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Created | Created container: monitoring-plugin
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Started | Started container monitoring-plugin
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Started | Started container monitoring-plugin
openshift-network-console | kubelet | networking-console-plugin-79f587d78f-6bkc6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbffd1dbbfea8326edd5142aaed93290359c152c805239f2ffc77a21b6648490" in 1.959s (1.959s including waiting). Image size: 446757716 bytes.
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Created | Created container: monitoring-plugin
openshift-network-console | kubelet | networking-console-plugin-79f587d78f-6bkc6 | Started | Started container networking-console-plugin
openshift-console | kubelet | console-576f8c76bf-2xx46 | Started | Started container console
openshift-console | kubelet | console-576f8c76bf-2xx46 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" in 4.082s (4.082s including waiting). Image size: 633766177 bytes.
openshift-console | kubelet | console-576f8c76bf-2xx46 | Created | Created container: console
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 3.",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection 
refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"16032\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 20, 54, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0037e2060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-marketplace | kubelet | marketplace-operator-6f5488b997-4qf9p | Started | Started container marketplace-operator
openshift-marketplace | kubelet | marketplace-operator-6f5488b997-4qf9p | Unhealthy | Readiness probe failed: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused
openshift-marketplace | kubelet | marketplace-operator-6f5488b997-4qf9p | ProbeError | Readiness probe error: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused body:
openshift-marketplace | kubelet | marketplace-operator-6f5488b997-4qf9p | Created | Created container: marketplace-operator
openshift-marketplace | kubelet | marketplace-operator-6f5488b997-4qf9p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" already present on machine
openshift-authentication | kubelet | oauth-openshift-7f7cbb95f8-pfw2n | Started | Started container oauth-openshift
openshift-authentication | kubelet | oauth-openshift-7f7cbb95f8-pfw2n | Created | Created container: oauth-openshift
openshift-authentication | multus | oauth-openshift-7f7cbb95f8-pfw2n | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes
openshift-authentication | kubelet | oauth-openshift-7f7cbb95f8-pfw2n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" already present on machine
openshift-machine-api | cluster-baremetal-operator-d6bb9bb76-k98fq_7ea08551-7445-4ff2-bc46-a30542e76b47 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-d6bb9bb76-k98fq_7ea08551-7445-4ff2-bc46-a30542e76b47 became leader
openshift-machine-api | cluster-baremetal-operator-d6bb9bb76-k98fq_7ea08551-7445-4ff2-bc46-a30542e76b47 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-d6bb9bb76-k98fq_7ea08551-7445-4ff2-bc46-a30542e76b47 became leader (x2)
openshift-console | kubelet | console-576f8c76bf-2xx46 | ProbeError | Startup probe error: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused body: (x2)
openshift-console | kubelet | console-576f8c76bf-2xx46 | Unhealthy | Startup probe failed: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused
openshift-console | kubelet | downloads-955b69498-x847l | Created | Created container: download-server
openshift-console | kubelet | downloads-955b69498-x847l | Started | Started container download-server
openshift-console | kubelet | downloads-955b69498-x847l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085" in 33.114s (33.114s including waiting). Image size: 2895784037 bytes. (x2)
openshift-console | kubelet | downloads-955b69498-x847l | Unhealthy | Readiness probe failed: Get "http://10.128.0.94:8080/": dial tcp 10.128.0.94:8080: connect: connection refused (x2)
openshift-console | kubelet | downloads-955b69498-x847l | ProbeError | Readiness probe error: Get "http://10.128.0.94:8080/": dial tcp 10.128.0.94:8080: connect: connection refused body: (x6)
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.33_openshift"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"}] to [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"} {"oauth-openshift" "4.18.33_openshift"}]
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/downloads\": dial tcp 172.30.0.1:443: connect: connection refused",Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-xrqvm
openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing (x2)
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well")
openshift-image-registry | kubelet | node-ca-xrqvm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:016ce2c441bfe2106222cd1285f2db09e8cf3712396d4bfbb52fdacb832aa1da"
openshift-image-registry | kubelet | node-ca-xrqvm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:016ce2c441bfe2106222cd1285f2db09e8cf3712396d4bfbb52fdacb832aa1da" in 2.27s (2.27s including waiting). Image size: 481536115 bytes.
openshift-image-registry | kubelet | node-ca-xrqvm | Created | Created container: node-ca
openshift-image-registry | kubelet | node-ca-xrqvm | Started | Started container node-ca
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
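Several records above carry an (xN) marker, meaning the same event was recorded N times. When post-processing such a dump, it helps to split that marker off the message; a minimal sketch (the regex and the `split_count` helper are illustrative, not part of any real tooling):

```python
import re

# Matches a trailing repeat-count marker such as " (x5)".
_COUNT = re.compile(r"\s*\(x(\d+)\)\s*$")

def split_count(message):
    """Return (message_without_marker, count); count defaults to 1."""
    m = _COUNT.search(message)
    if m:
        return message[: m.start()], int(m.group(1))
    return message, 1

print(split_count("Stopping container oauth-openshift (x5)"))
# → ('Stopping container oauth-openshift', 5)
```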
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 2 to 3 because static pod is ready
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"16032\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 20, 54, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0037e2060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"),Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 3." 
to "",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.71.94:443/healthz\": dial tcp 172.30.71.94:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"16032\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 20, 54, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0037e2060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", 
ResourceVersion:\"16032\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 20, 54, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0037e2060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "All is well"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"eb13ef9a-d7fb-415d-87b3-663787bef747\", ResourceVersion:\"16032\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 24, 1, 57, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 24, 2, 20, 54, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0037e2060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-69565684c5 to 1

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-4run762hnmqqc -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.100/23] from ovn-kubernetes

openshift-monitoring

replicaset-controller

thanos-querier-69565684c5

SuccessfulCreate

Created pod: thanos-querier-69565684c5-snfqm

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

multus

thanos-querier-69565684c5-snfqm

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1"

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" in 1.567s (1.567s including waiting). Image size: 437808562 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-56t3bo1jupebb -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-67ddc7b799 to 1

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553"

openshift-monitoring

replicaset-controller

metrics-server-67ddc7b799

SuccessfulCreate

Created pod: metrics-server-67ddc7b799-zlnvf

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled down replica set metrics-server-7b9cc5984b to 0 from 1

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-cc55f5fb6 to 1

openshift-monitoring

replicaset-controller

metrics-server-7b9cc5984b

SuccessfulDelete

Deleted pod: metrics-server-7b9cc5984b-smpdl

openshift-monitoring

replicaset-controller

telemeter-client-cc55f5fb6

SuccessfulCreate

Created pod: telemeter-client-cc55f5fb6-hcn4g

openshift-monitoring

kubelet

metrics-server-7b9cc5984b-smpdl

Killing

Stopping container metrics-server

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" in 2.658s (2.658s including waiting). Image size: 502604403 bytes.

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

multus

telemeter-client-cc55f5fb6-hcn4g

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c"

openshift-monitoring

kubelet

metrics-server-67ddc7b799-zlnvf

Started

Started container metrics-server

openshift-monitoring

kubelet

metrics-server-67ddc7b799-zlnvf

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-67ddc7b799-zlnvf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-19m8gtk5v5gsq -n openshift-monitoring because it was missing

openshift-monitoring

multus

metrics-server-67ddc7b799-zlnvf

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Created

Created container: thanos-query

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Started

Started container thanos-query

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229"

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Created

Created container: kube-rbac-proxy

openshift-monitoring

multus

metrics-server-67ddc7b799-zlnvf

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" in 1.387s (1.387s including waiting). Image size: 412998070 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" in 3.145s (3.145s including waiting). Image size: 467433909 bytes.

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c" in 2.374s (2.374s including waiting). Image size: 480427687 bytes.

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Created

Created container: kube-rbac-proxy-rules

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Started

Started container kube-rbac-proxy-rules

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Created

Created container: telemeter-client

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" already present on machine

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Started

Started container kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-69565684c5-snfqm

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92"

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Started

Started container reload

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Created

Created container: reload

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-monitoring

kubelet

telemeter-client-cc55f5fb6-hcn4g

Started

Started container telemeter-client

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-79b5f69b87 to 1

openshift-console

replicaset-controller

console-79b5f69b87

SuccessfulCreate

Created pod: console-79b5f69b87-9qbb4

openshift-console

multus

console-79b5f69b87-9qbb4

AddedInterface

Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" in 4.289s (4.289s including waiting). Image size: 605597321 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console

kubelet

console-79b5f69b87-9qbb4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine

openshift-console

kubelet

console-79b5f69b87-9qbb4

Started

Started container console

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-console

kubelet

console-79b5f69b87-9qbb4

Created

Created container: console

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-8586dccc9b-sl5hz_48997232-07ae-47f8-a617-0277fdfbb90e became leader

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-576f8c76bf to 0 from 1

openshift-console

kubelet

console-576f8c76bf-2xx46

Killing

Stopping container console

openshift-console

replicaset-controller

console-576f8c76bf

SuccessfulDelete

Deleted pod: console-576f8c76bf-2xx46

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.106/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-console

replicaset-controller

console-5d9776c47f

SuccessfulCreate

Created pod: console-5d9776c47f-6p4nc

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-5d9776c47f to 1

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-console

kubelet

console-5d9776c47f-6p4nc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-console

multus

console-5d9776c47f-6p4nc

AddedInterface

Add eth0 [10.128.0.107/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.33, 1 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"

openshift-console

kubelet

console-5d9776c47f-6p4nc

Started

Started container console

openshift-console

kubelet

console-5d9776c47f-6p4nc

Created

Created container: console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5d9776c47f to 0 from 1

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6647cb86fc to 1 from 0

openshift-console

replicaset-controller

console-5d9776c47f

SuccessfulDelete

Deleted pod: console-5d9776c47f-6p4nc

openshift-console

replicaset-controller

console-6647cb86fc

SuccessfulCreate

Created pod: console-6647cb86fc-wzjr8

openshift-console

kubelet

console-6647cb86fc-wzjr8

Created

Created container: console

openshift-console

kubelet

console-5d9776c47f-6p4nc

Killing

Stopping container console

openshift-console

kubelet

console-6647cb86fc-wzjr8

Started

Started container console

openshift-console

kubelet

console-6647cb86fc-wzjr8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-console

multus

console-6647cb86fc-wzjr8

AddedInterface

Add eth0 [10.128.0.108/23] from ovn-kubernetes

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-c67bf58c9-mn7dg became leader

openshift-console

kubelet

console-79b5f69b87-9qbb4

Killing

Stopping container console

openshift-console

replicaset-controller

console-79b5f69b87

SuccessfulDelete

Deleted pod: console-79b5f69b87-9qbb4

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-79b5f69b87 to 0 from 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_0b6aa0e4-63aa-40d1-8483-d0fbe2c5874e became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_fcfb2887-56b7-4fd0-b9be-7d1c7aa45a67 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 4 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_aa08ff96-c795-4b18-bec6-da630df8f62a became leader

sushy-emulator

replicaset-controller

sushy-emulator-78f6d7d749

SuccessfulCreate

Created pod: sushy-emulator-78f6d7d749-q2bh9

sushy-emulator

deployment-controller

sushy-emulator

ScalingReplicaSet

Scaled up replica set sushy-emulator-78f6d7d749 to 1

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

sushy-emulator

multus

sushy-emulator-78f6d7d749-q2bh9

AddedInterface

Add eth0 [10.128.0.109/23] from ovn-kubernetes

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-q2bh9

Pulling

Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490"

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-q2bh9

Created

Created container: sushy-emulator

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-q2bh9

Started

Started container sushy-emulator

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-q2bh9

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" in 6.837s (6.837s including waiting). Image size: 325685589 bytes.

sushy-emulator

replicaset-controller

nova-console-poller-67cbf9ddc7

SuccessfulCreate

Created pod: nova-console-poller-67cbf9ddc7-sbfjc

sushy-emulator

deployment-controller

nova-console-poller

ScalingReplicaSet

Scaled up replica set nova-console-poller-67cbf9ddc7 to 1

sushy-emulator

multus

nova-console-poller-67cbf9ddc7-sbfjc

AddedInterface

Add eth0 [10.128.0.110/23] from ovn-kubernetes

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 4.923s (4.923s including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Started

Started container console-poller-3682ad35-b8ce-417b-a6c2-2632e895716f

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Created

Created container: console-poller-3682ad35-b8ce-417b-a6c2-2632e895716f

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-6cf66f6dd4-lbnq4_7753db19-5d58-44bd-b279-185da4dc7bc8 became leader

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 397ms (398ms including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Created

Created container: console-poller-1cd57079-028e-47c1-9c27-9f0fcdd1ed46

sushy-emulator

kubelet

nova-console-poller-67cbf9ddc7-sbfjc

Started

Started container console-poller-1cd57079-028e-47c1-9c27-9f0fcdd1ed46

sushy-emulator

replicaset-controller

nova-console-recorder-856878b5df

SuccessfulCreate

Created pod: nova-console-recorder-856878b5df-4lhhs

sushy-emulator

deployment-controller

nova-console-recorder

ScalingReplicaSet

Scaled up replica set nova-console-recorder-856878b5df to 1

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"

sushy-emulator

multus

nova-console-recorder-856878b5df-4lhhs

AddedInterface

Add eth0 [10.128.0.111/23] from ovn-kubernetes

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 7.037s (7.037s including waiting). Image size: 664134874 bytes.

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Started

Started container console-recorder-1cd57079-028e-47c1-9c27-9f0fcdd1ed46

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Started

Started container console-recorder-3682ad35-b8ce-417b-a6c2-2632e895716f

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Created

Created container: console-recorder-3682ad35-b8ce-417b-a6c2-2632e895716f

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 400ms (400ms including waiting). Image size: 664134874 bytes.

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Created

Created container: console-recorder-1cd57079-028e-47c1-9c27-9f0fcdd1ed46

sushy-emulator

kubelet

nova-console-recorder-856878b5df-4lhhs

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-storage namespace

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

SuccessfulCreate

Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

multus

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

AddedInterface

Add eth0 [10.128.0.112/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Created

Created container: util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Started

Started container util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Started

Started container pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Created

Created container: pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.133s (1.134s including waiting). Image size: 108204 bytes.

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Created

Created container: extract

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf

Started

Started container extract

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

Completed

Job completed

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy

openshift-storage

replicaset-controller

lvms-operator-7bbcf6487b

SuccessfulCreate

Created pod: lvms-operator-7bbcf6487b-nkgxz

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-7bbcf6487b to 1
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

AllRequirementsMet

all requirements found, attempting install
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.

openshift-storage

multus

lvms-operator-7bbcf6487b-nkgxz

AddedInterface

Add eth0 [10.128.0.113/23] from ovn-kubernetes

openshift-storage

kubelet

lvms-operator-7bbcf6487b-nkgxz

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"

openshift-storage

kubelet

lvms-operator-7bbcf6487b-nkgxz

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 5.339s (5.339s including waiting). Image size: 238305644 bytes.

openshift-storage

kubelet

lvms-operator-7bbcf6487b-nkgxz

Started

Started container manager

openshift-storage

kubelet

lvms-operator-7bbcf6487b-nkgxz

Created

Created container: manager

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

install strategy completed with no errors
(x2)

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29531670

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531670

SuccessfulCreate

Created pod: collect-profiles-29531670-t652n

openshift-marketplace

job-controller

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971

SuccessfulCreate

Created pod: a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

openshift-operator-lifecycle-manager

multus

collect-profiles-29531670-t652n

AddedInterface

Add eth0 [10.128.0.114/23] from ovn-kubernetes

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

SuccessfulCreate

Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531670-t652n

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531670-t652n

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29531670-t652n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Created

Created container: util

openshift-marketplace

multus

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

AddedInterface

Add eth0 [10.128.0.115/23] from ovn-kubernetes

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908"

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

multus

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Created

Created container: util

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Started

Started container util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Started

Started container util

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1"

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29531670, condition: Complete

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29531670

Completed

Job completed

openshift-marketplace

job-controller

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05

SuccessfulCreate

Created pod: f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

openshift-marketplace

multus

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

AddedInterface

Add eth0 [10.128.0.117/23] from ovn-kubernetes

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Created

Created container: util

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Started

Started container util

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf"

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

SuccessfulCreate

Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Started

Started container util

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Created

Created container: util

openshift-marketplace

multus

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

AddedInterface

Add eth0 [10.128.0.118/23] from ovn-kubernetes

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6"

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" in 7.224s (7.224s including waiting). Image size: 329517 bytes.

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" in 4.173s (4.173s including waiting). Image size: 176636 bytes.

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Created

Created container: pull

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Started

Started container pull

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Started

Started container pull

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Created

Created container: pull

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Started

Started container extract

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Created

Created container: extract

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Started

Started container pull

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Started

Started container extract

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Created

Created container: pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 9.888s (9.888s including waiting). Image size: 108352841 bytes.

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68

Created

Created container: extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 2.693s (2.693s including waiting). Image size: 4900233 bytes.

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Created

Created container: pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Started

Started container pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Created

Created container: extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Created

Created container: extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Started

Started container extract

openshift-marketplace

job-controller

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971

Completed

Job completed

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx

Started

Started container extract

openshift-marketplace

job-controller

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05

Completed

Job completed

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsUnknown

requirements not yet checked

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsNotMet

one or more requirements couldn't be found

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

Completed

Job completed

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

Completed

Job completed

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

RequirementsUnknown

requirements not yet checked

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

RequirementsNotMet

one or more requirements couldn't be found

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-694c9596b7 to 1

openshift-nmstate

replicaset-controller

nmstate-operator-694c9596b7

SuccessfulCreate

Created pod: nmstate-operator-694c9596b7-xp57m

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

AllRequirementsMet

all requirements found, attempting install

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

waiting for install components to report healthy
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce"

openshift-nmstate

multus

nmstate-operator-694c9596b7-xp57m

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes

openshift-nmstate

multus

nmstate-operator-694c9596b7-xp57m

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

waiting for install components to report healthy

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce"

openshift-nmstate

operator-lifecycle-manager

install-xx2rf

AppliedWithWarnings

1 warning(s) generated during installation of operator "kubernetes-nmstate-operator.4.18.0-202602041913" (CustomResourceDefinition "nmstates.nmstate.io"): nmstate.io/v1beta1 NMState is deprecated; use nmstate.io/v1 NMState

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Started

Started container nmstate-operator

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 2.695s (2.695s including waiting). Image size: 451308023 bytes.

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Started

Started container nmstate-operator

openshift-nmstate

operator-lifecycle-manager

install-xx2rf

AppliedWithWarnings

1 warning(s) generated during installation of operator "kubernetes-nmstate-operator.4.18.0-202602041913" (CustomResourceDefinition "nmstates.nmstate.io"): nmstate.io/v1beta1 NMState is deprecated; use nmstate.io/v1 NMState

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Created

Created container: nmstate-operator

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 2.695s (2.695s including waiting). Image size: 451308023 bytes.

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-xp57m

Created

Created container: nmstate-operator

metallb-system

replicaset-controller

metallb-operator-webhook-server-559d754c8d

SuccessfulCreate

Created pod: metallb-operator-webhook-server-559d754c8d-8sgn7

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-7577845998 to 1

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

install strategy completed with no errors

metallb-system

replicaset-controller

metallb-operator-controller-manager-7577845998

SuccessfulCreate

Created pod: metallb-operator-controller-manager-7577845998-zvq74

metallb-system

replicaset-controller

metallb-operator-controller-manager-7577845998

SuccessfulCreate

Created pod: metallb-operator-controller-manager-7577845998-zvq74

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

install strategy completed with no errors

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-559d754c8d to 1

metallb-system

replicaset-controller

metallb-operator-webhook-server-559d754c8d

SuccessfulCreate

Created pod: metallb-operator-webhook-server-559d754c8d-8sgn7

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-559d754c8d to 1

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-7577845998 to 1

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854"

metallb-system

multus

metallb-operator-controller-manager-7577845998-zvq74

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854"

metallb-system

multus

metallb-operator-controller-manager-7577845998-zvq74

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

metallb-system

multus

metallb-operator-webhook-server-559d754c8d-8sgn7

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e"

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e"

metallb-system

multus

metallb-operator-webhook-server-559d754c8d-8sgn7

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

metallb-system

operator-lifecycle-manager

install-vmhp7

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2

metallb-system

operator-lifecycle-manager

install-vmhp7

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

NeedsReinstall

calculated deployment install is bad

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

NeedsReinstall

calculated deployment install is bad
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

AllRequirementsMet

all requirements found, attempting install
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

AllRequirementsMet

all requirements found, attempting install
(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found
(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsUnknown

requirements not yet checked

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsUnknown

requirements not yet checked

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Started

Started container webhook-server

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 7.65s (7.65s including waiting). Image size: 462337664 bytes.

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Created

Created container: manager

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Created

Created container: webhook-server

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Created

Created container: manager

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Started

Started container manager

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 7.65s (7.65s including waiting). Image size: 462337664 bytes.

metallb-system

kubelet

metallb-operator-controller-manager-7577845998-zvq74

Started

Started container manager

metallb-system

metallb-operator-controller-manager-7577845998-zvq74_76ef1ea5-54bd-476f-80a8-fc4a84c1a321

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-7577845998-zvq74_76ef1ea5-54bd-476f-80a8-fc4a84c1a321 became leader
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

waiting for install components to report healthy

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 7.408s (7.408s including waiting). Image size: 554925471 bytes.

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 7.408s (7.408s including waiting). Image size: 554925471 bytes.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsNotMet

one or more requirements couldn't be found

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Created

Created container: webhook-server

metallb-system

kubelet

metallb-operator-webhook-server-559d754c8d-8sgn7

Started

Started container webhook-server

metallb-system

metallb-operator-controller-manager-7577845998-zvq74_76ef1ea5-54bd-476f-80a8-fc4a84c1a321

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-7577845998-zvq74_76ef1ea5-54bd-476f-80a8-fc4a84c1a321 became leader

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsNotMet

one or more requirements couldn't be found
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

waiting for install components to report healthy
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.

default

cert-manager-istio-csr-controller

ControllerStarted

controller is starting
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-5545bd876 to 1

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager namespace

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-6888856db4 to 1

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

SuccessfulCreate

Created pod: cert-manager-webhook-6888856db4-j4m97
(x5)

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

FailedCreate

Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-6888856db4 to 1

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

SuccessfulCreate

Created pod: cert-manager-webhook-6888856db4-j4m97

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-5545bd876 to 1
(x5)

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

FailedCreate

Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found

cert-manager

multus

cert-manager-webhook-6888856db4-j4m97

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-545d4d4674 to 1

cert-manager

multus

cert-manager-webhook-6888856db4-j4m97

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-545d4d4674 to 1
(x10)

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

FailedCreate

Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found
(x10)

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

FailedCreate

Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

SuccessfulCreate

Created pod: cert-manager-cainjector-5545bd876-hhm6l

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

SuccessfulCreate

Created pod: cert-manager-cainjector-5545bd876-hhm6l

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install
(x11)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install
(x11)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 6.272s (6.272s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 6.272s (6.272s including waiting). Image size: 319887149 bytes.

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-2lpl8

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-2lpl8

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-66946c8978 to 2

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1

cert-manager

multus

cert-manager-cainjector-5545bd876-hhm6l

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Started

Started container cert-manager-webhook

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-66946c8978

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-2lpl8

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-cainjector-5545bd876-hhm6l

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

cert-manager

kubelet

cert-manager-cainjector-5545bd876-hhm6l

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

cert-manager

kubelet

cert-manager-cainjector-5545bd876-hhm6l

Created

Created container: cert-manager-cainjector

cert-manager

kubelet

cert-manager-cainjector-5545bd876-hhm6l

Started

Started container cert-manager-cainjector

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-66946c8978

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-2lpl8

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Created

Created container: cert-manager-webhook

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-66946c8978

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

cert-manager

multus

cert-manager-cainjector-5545bd876-hhm6l

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-66946c8978 to 2

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-webhook-6888856db4-j4m97

Started

Started container cert-manager-webhook

cert-manager

kubelet

cert-manager-cainjector-5545bd876-hhm6l

Created

Created container: cert-manager-cainjector

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1

cert-manager

kubelet

cert-manager-cainjector-5545bd876-hhm6l

Started

Started container cert-manager-cainjector

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-rpqh9

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-rpqh9

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-66946c8978

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-8lklf

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-8lklf

openshift-operators

kubelet

perses-operator-5bf474d74f-rpqh9

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

kubelet

observability-operator-59bdc8b94-8lklf

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"

openshift-operators

multus

perses-operator-5bf474d74f-rpqh9

AddedInterface

Add eth0 [10.128.0.129/23] from ovn-kubernetes

openshift-operators

multus

perses-operator-5bf474d74f-rpqh9

AddedInterface

Add eth0 [10.128.0.129/23] from ovn-kubernetes

openshift-operators

kubelet

observability-operator-59bdc8b94-8lklf

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"

openshift-operators

multus

observability-operator-59bdc8b94-8lklf

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

kube-system

cert-manager-cainjector-5545bd876-hhm6l_e40e0220-bc69-4a27-80a2-8bd4d70aa8c9

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-5545bd876-hhm6l_e40e0220-bc69-4a27-80a2-8bd4d70aa8c9 became leader

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

openshift-operators

multus

observability-operator-59bdc8b94-8lklf

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

kubelet

perses-operator-5bf474d74f-rpqh9

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-54xdp

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-54xdp

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.

cert-manager

multus

cert-manager-545d4d4674-54xdp

AddedInterface

Add eth0 [10.128.0.130/23] from ovn-kubernetes

cert-manager

multus

cert-manager-545d4d4674-54xdp

AddedInterface

Add eth0 [10.128.0.130/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-545d4d4674-54xdp

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

cert-manager

kubelet

cert-manager-545d4d4674-54xdp

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

openshift-operators

kubelet

perses-operator-5bf474d74f-rpqh9

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 9.403s (9.403s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 9.803s (9.803s including waiting). Image size: 199215153 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-8lklf

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 9.42s (9.42s including waiting). Image size: 399540002 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 9.384s (9.384s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

perses-operator-5bf474d74f-rpqh9

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 9.403s (9.403s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 9.48s (9.48s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 9.803s (9.803s including waiting). Image size: 199215153 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-8lklf

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 9.42s (9.42s including waiting). Image size: 399540002 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 9.384s (9.384s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 9.48s (9.48s including waiting). Image size: 151103408 bytes.

cert-manager

kubelet

cert-manager-545d4d4674-54xdp

Started

Started container cert-manager-controller

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Started

Started container prometheus-operator

cert-manager

kubelet

cert-manager-545d4d4674-54xdp

Created

Created container: cert-manager-controller

cert-manager

kubelet

cert-manager-545d4d4674-54xdp

Started

Started container cert-manager-controller

openshift-operators

kubelet

perses-operator-5bf474d74f-rpqh9

Started

Started container perses-operator

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Created

Created container: prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Created

Created container: prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-2lpl8

Started

Started container prometheus-operator

openshift-operators

kubelet

perses-operator-5bf474d74f-rpqh9

Created

Created container: perses-operator

openshift-operators

kubelet

observability-operator-59bdc8b94-8lklf

Started

Started container operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

perses-operator-5bf474d74f-rpqh9

Started

Started container perses-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-qbg2d

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

observability-operator-59bdc8b94-8lklf

Created

Created container: operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-66946c8978-9t8v8

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

install strategy completed with no errors

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

install strategy completed with no errors

metallb-system

replicaset-controller

frr-k8s-webhook-server-78b44bf5bb

SuccessfulCreate

Created pod: frr-k8s-webhook-server-78b44bf5bb-lthbs

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-69bbfbf88f to 1

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-gll2f

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 628802b4-88d9-4496-846d-bea1a1f065a2] does not exist in namespace ""

metallb-system

replicaset-controller

controller-69bbfbf88f

SuccessfulCreate

Created pod: controller-69bbfbf88f-s2t6d

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-lbfkl

metallb-system

kubelet

frr-k8s-gll2f

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found

metallb-system

kubelet

controller-69bbfbf88f-s2t6d

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-lthbs

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

kubelet

controller-69bbfbf88f-s2t6d

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine

metallb-system

kubelet

controller-69bbfbf88f-s2t6d

Created

Created container: controller

metallb-system

kubelet

frr-k8s-gll2f

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

kubelet

controller-69bbfbf88f-s2t6d

Started

Started container controller
(x2)

metallb-system

kubelet

speaker-lbfkl

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

multus

controller-69bbfbf88f-s2t6d

AddedInterface

Add eth0 [10.128.0.132/23] from ovn-kubernetes

metallb-system

multus

frr-k8s-webhook-server-78b44bf5bb-lthbs

AddedInterface

Add eth0 [10.128.0.131/23] from ovn-kubernetes

openshift-nmstate

deployment-controller

nmstate-console-plugin

ScalingReplicaSet

Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-866bcb46dc to 1

metallb-system

kubelet

speaker-lbfkl

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine

openshift-nmstate

replicaset-controller

nmstate-webhook-866bcb46dc

SuccessfulCreate

Created pod: nmstate-webhook-866bcb46dc-rft7d

openshift-nmstate

deployment-controller

nmstate-metrics

ScalingReplicaSet

Scaled up replica set nmstate-metrics-58c85c668d to 1

openshift-nmstate

replicaset-controller

nmstate-metrics-58c85c668d

SuccessfulCreate

Created pod: nmstate-metrics-58c85c668d-zx9wt

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-bpzvz

openshift-nmstate

replicaset-controller

nmstate-console-plugin-5c78fc5d65

SuccessfulCreate

Created pod: nmstate-console-plugin-5c78fc5d65-nsdtc

openshift-nmstate

multus

nmstate-metrics-58c85c668d-zx9wt

AddedInterface

Add eth0 [10.128.0.133/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-nsdtc

Pulling

Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078"

openshift-nmstate

multus

nmstate-console-plugin-5c78fc5d65-nsdtc

AddedInterface

Add eth0 [10.128.0.135/23] from ovn-kubernetes

metallb-system

kubelet

speaker-lbfkl

Created

Created container: speaker

metallb-system

kubelet

controller-69bbfbf88f-s2t6d

Started

Started container kube-rbac-proxy

metallb-system

kubelet

controller-69bbfbf88f-s2t6d

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

controller-69bbfbf88f-s2t6d

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 1.346s (1.359s including waiting). Image size: 464984427 bytes.

metallb-system

kubelet

speaker-lbfkl

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"

openshift-nmstate

kubelet

nmstate-handler-bpzvz

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

openshift-nmstate

multus

nmstate-webhook-866bcb46dc-rft7d

AddedInterface

Add eth0 [10.128.0.134/23] from ovn-kubernetes
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-rft7d

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

metallb-system

kubelet

speaker-lbfkl

Started

Started container speaker
(x10)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-zx9wt

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

openshift-console

replicaset-controller

console-7db5f64756

SuccessfulCreate

Created pod: console-7db5f64756-h92rx
(x5)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml
(x4)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 1 replicas available"

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7db5f64756 to 1

openshift-console

kubelet

console-7db5f64756-h92rx

Created

Created container: console

openshift-console

kubelet

console-7db5f64756-h92rx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine

openshift-console

kubelet

console-7db5f64756-h92rx

Started

Started container console

openshift-console

multus

console-7db5f64756-h92rx

AddedInterface

Add eth0 [10.128.0.136/23] from ovn-kubernetes

metallb-system

kubelet

speaker-lbfkl

Started

Started container kube-rbac-proxy

metallb-system

kubelet

speaker-lbfkl

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 819ms (819ms including waiting). Image size: 464984427 bytes.

metallb-system

kubelet

speaker-lbfkl

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-lthbs

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 8.286s (8.286s including waiting). Image size: 662037039 bytes.

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 8.005s (8.005s including waiting). Image size: 662037039 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-nsdtc

Pulled

Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 5.916s (5.916s including waiting). Image size: 453642085 bytes.

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-rft7d

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.19s (6.19s including waiting). Image size: 498436272 bytes.

openshift-nmstate

kubelet

nmstate-handler-bpzvz

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.57s (6.57s including waiting). Image size: 498436272 bytes.

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-zx9wt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.068s (6.068s including waiting). Image size: 498436272 bytes.

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-rft7d

Created

Created container: nmstate-webhook

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-rft7d

Started

Started container nmstate-webhook

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-gll2f

Started

Started container cp-reloader

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-zx9wt

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-zx9wt

Started

Started container nmstate-metrics

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-lthbs

Started

Started container frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-lthbs

Created

Created container: frr-k8s-webhook-server

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-zx9wt

Created

Created container: nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-zx9wt

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-gll2f

Started

Started container cp-frr-files

openshift-nmstate

kubelet

nmstate-handler-bpzvz

Started

Started container nmstate-handler

openshift-nmstate

kubelet

nmstate-handler-bpzvz

Created

Created container: nmstate-handler

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: cp-frr-files

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-zx9wt

Started

Started container kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-nsdtc

Created

Created container: nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-nsdtc

Started

Started container nmstate-console-plugin

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-gll2f

Started

Started container cp-metrics

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-545d4d4674-54xdp-external-cert-manager-controller became leader

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-gll2f

Started

Started container controller

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: frr

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: controller

metallb-system

kubelet

frr-k8s-gll2f

Started

Started container frr

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: reloader

metallb-system

kubelet

frr-k8s-gll2f

Started

Started container reloader

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-gll2f

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-gll2f

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-gll2f

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6647cb86fc to 0 from 1
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.33, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 2 replicas available"
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")

openshift-console

kubelet

console-6647cb86fc-wzjr8

Killing

Stopping container console

openshift-console

replicaset-controller

console-6647cb86fc

SuccessfulDelete

Deleted pod: console-6647cb86fc-wzjr8

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-q84n6

openshift-storage

multus

vg-manager-q84n6

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes

openshift-storage

multus

vg-manager-q84n6

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes
(x2)

openshift-storage

kubelet

vg-manager-q84n6

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine
(x2)

openshift-storage

kubelet

vg-manager-q84n6

Started

Started container vg-manager
(x2)

openshift-storage

kubelet

vg-manager-q84n6

Created

Created container: vg-manager
(x2)

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io
(x15)

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack-operators namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack namespace

openstack-operators

multus

openstack-operator-index-jjl54

AddedInterface

Add eth0 [10.128.0.138/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-jjl54

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

kubelet

openstack-operator-index-jjl54

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 1.016s (1.016s including waiting). Image size: 918506145 bytes.

openstack-operators

kubelet

openstack-operator-index-jjl54

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-jjl54

Started

Started container registry-server
(x9)

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index

openstack-operators

kubelet

openstack-operator-index-jjl54

Killing

Stopping container registry-server

openstack-operators

kubelet

openstack-operator-index-kxwsj

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 437ms (437ms including waiting). Image size: 918506145 bytes.

openstack-operators

kubelet

openstack-operator-index-kxwsj

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

multus

openstack-operator-index-kxwsj

AddedInterface

Add eth0 [10.128.0.139/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-kxwsj

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-kxwsj

Started

Started container registry-server

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.170.11:50051: connect: connection refused"

openstack-operators

job-controller

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda149afda

SuccessfulCreate

Created pod: 11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

openstack-operators

multus

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

AddedInterface

Add eth0 [10.128.0.140/23] from ovn-kubernetes

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Started

Started container util

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Created

Created container: util

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:5de87989637b6d22555d7bde45e2a2d14c6ec08d"

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:5de87989637b6d22555d7bde45e2a2d14c6ec08d" in 1.115s (1.115s including waiting). Image size: 115772 bytes.

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Started

Started container pull

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Created

Created container: pull

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Created

Created container: extract

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Started

Started container extract

openstack-operators

kubelet

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine

openstack-operators

job-controller

11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda149afda

Completed

Job completed

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

RequirementsUnknown

requirements not yet checked

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

RequirementsNotMet

one or more requirements couldn't be found

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

AllRequirementsMet

all requirements found, attempting install

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

waiting for install components to report healthy

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability.

openstack-operators

deployment-controller

openstack-operator-controller-init

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-init-55c649df44 to 1

openstack-operators

replicaset-controller

openstack-operator-controller-init-55c649df44

SuccessfulCreate

Created pod: openstack-operator-controller-init-55c649df44-lm7cq

openstack-operators

multus

openstack-operator-controller-init-55c649df44-lm7cq

AddedInterface

Add eth0 [10.128.0.141/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-lm7cq

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785"

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-lm7cq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785" in 5.059s (5.059s including waiting). Image size: 293229892 bytes.

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-lm7cq

Started

Started container operator

openstack-operators

kubelet

openstack-operator-controller-init-55c649df44-lm7cq

Created

Created container: operator

openstack-operators

openstack-operator-controller-init-55c649df44-lm7cq_0ff9bc63-ebe2-4269-9637-0c57596fc1a2

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-init-55c649df44-lm7cq_0ff9bc63-ebe2-4269-9637-0c57596fc1a2 became leader

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

install strategy completed with no errors

openstack-operators

cert-manager-certificates-trigger

barbican-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-smhqr"

openstack-operators

cert-manager-certificates-trigger

cinder-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-xhl5c"

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

infra-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

barbican-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

barbican-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-rcjz7"

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

infra-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "infra-operator-metrics-certs-66kfz"

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-w7bsj"

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-jflwf"

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-57l25"

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-tjf5s"

openstack-operators

cert-manager-certificaterequests-issuer-ca

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

heat-operator-metrics-certs

Requested

Created new CertificateRequest resource "heat-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-s7gcg"

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

cinder-operator-controller-manager-55d77d7b5c

SuccessfulCreate

Created pod: cinder-operator-controller-manager-55d77d7b5c-b72xt

openstack-operators

replicaset-controller

barbican-operator-controller-manager-868647ff47

SuccessfulCreate

Created pod: barbican-operator-controller-manager-868647ff47-2ldv2

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1

openstack-operators

replicaset-controller

designate-operator-controller-manager-6d8bf5c495

SuccessfulCreate

Created pod: designate-operator-controller-manager-6d8bf5c495-dzbvc

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-55d77d7b5c to 1

openstack-operators

replicaset-controller

cinder-operator-controller-manager-55d77d7b5c

SuccessfulCreate

Created pod: cinder-operator-controller-manager-55d77d7b5c-b72xt

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-m84j5"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1

openstack-operators

cert-manager-certificaterequests-issuer-venafi

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

barbican-operator-controller-manager-868647ff47

SuccessfulCreate

Created pod: barbican-operator-controller-manager-868647ff47-2ldv2

openstack-operators

replicaset-controller

glance-operator-controller-manager-784b5bb6c5

SuccessfulCreate

Created pod: glance-operator-controller-manager-784b5bb6c5-zfd69

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-784b5bb6c5 to 1

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

infra-operator-controller-manager-5f879c76b6

SuccessfulCreate

Created pod: infra-operator-controller-manager-5f879c76b6-2kk8t

openstack-operators

replicaset-controller

glance-operator-controller-manager-784b5bb6c5

SuccessfulCreate

Created pod: glance-operator-controller-manager-784b5bb6c5-zfd69

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-784b5bb6c5 to 1

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1

openstack-operators

replicaset-controller

infra-operator-controller-manager-5f879c76b6

SuccessfulCreate

Created pod: infra-operator-controller-manager-5f879c76b6-2kk8t

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1

openstack-operators

replicaset-controller

horizon-operator-controller-manager-5b9b8895d5

SuccessfulCreate

Created pod: horizon-operator-controller-manager-5b9b8895d5-49gvb

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1

openstack-operators

replicaset-controller

designate-operator-controller-manager-6d8bf5c495

SuccessfulCreate

Created pod: designate-operator-controller-manager-6d8bf5c495-dzbvc

openstack-operators

replicaset-controller

heat-operator-controller-manager-69f49c598c

SuccessfulCreate

Created pod: heat-operator-controller-manager-69f49c598c-5t6bt

openstack-operators

cert-manager-certificaterequests-issuer-vault

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-69f49c598c to 1

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1

openstack-operators

replicaset-controller

horizon-operator-controller-manager-5b9b8895d5

SuccessfulCreate

Created pod: horizon-operator-controller-manager-5b9b8895d5-49gvb

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-m84j5"

openstack-operators

cert-manager-certificaterequests-issuer-ca

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

heat-operator-controller-manager-69f49c598c

SuccessfulCreate

Created pod: heat-operator-controller-manager-69f49c598c-5t6bt

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-55d77d7b5c to 1

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-69f49c598c to 1

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1

openstack-operators

replicaset-controller

test-operator-controller-manager-5dc6794d5b

SuccessfulCreate

Created pod: test-operator-controller-manager-5dc6794d5b-4djnj

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

replicaset-controller

octavia-operator-controller-manager-659dc6bbfc

SuccessfulCreate

Created pod: octavia-operator-controller-manager-659dc6bbfc-74cdr

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-579b7786b9

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

openstack-operators

cert-manager-certificates-request-manager

ironic-operator-metrics-certs

Requested

Created new CertificateRequest resource "ironic-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-579b7786b9 to 1

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-668c99d594

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-668c99d594-qxzpw

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-5dc6794d5b to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-request-manager

infra-operator-metrics-certs

Requested

Created new CertificateRequest resource "infra-operator-metrics-certs-1"

openstack-operators

replicaset-controller

keystone-operator-controller-manager-b4d948c87

SuccessfulCreate

Created pod: keystone-operator-controller-manager-b4d948c87-ws6cb

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-567668f5cf to 1

openstack-operators

replicaset-controller

nova-operator-controller-manager-567668f5cf

SuccessfulCreate

Created pod: nova-operator-controller-manager-567668f5cf-nffrm

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-579b7786b9 to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-589c568786

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-589c568786-kwb4z

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

openstack-operator-controller-manager-5dc486cffc

SuccessfulCreate

Created pod: openstack-operator-controller-manager-5dc486cffc-q59hq

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-5dc486cffc to 1

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-579b7786b9

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-68f46476f to 1

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

replicaset-controller

placement-operator-controller-manager-8497b45c89

SuccessfulCreate

Created pod: placement-operator-controller-manager-8497b45c89-nn47h

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1

openstack-operators

replicaset-controller

manila-operator-controller-manager-67d996989d

SuccessfulCreate

Created pod: manila-operator-controller-manager-67d996989d-psxsg

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-67d996989d to 1

openstack-operators

replicaset-controller

ironic-operator-controller-manager-554564d7fc

SuccessfulCreate

Created pod: ironic-operator-controller-manager-554564d7fc-hksp2

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-659dc6bbfc to 1

openstack-operators

replicaset-controller

octavia-operator-controller-manager-659dc6bbfc

SuccessfulCreate

Created pod: octavia-operator-controller-manager-659dc6bbfc-74cdr

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-rsh8v"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

replicaset-controller

swift-operator-controller-manager-68f46476f

SuccessfulCreate

Created pod: swift-operator-controller-manager-68f46476f-pztlf

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-659dc6bbfc to 1

openstack-operators

replicaset-controller

ovn-operator-controller-manager-5955d8c787

SuccessfulCreate

Created pod: ovn-operator-controller-manager-5955d8c787-55b7d

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-668c99d594

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-668c99d594-qxzpw

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-567668f5cf to 1

openstack-operators

replicaset-controller

nova-operator-controller-manager-567668f5cf

SuccessfulCreate

Created pod: nova-operator-controller-manager-567668f5cf-nffrm

openstack-operators

replicaset-controller

ovn-operator-controller-manager-5955d8c787

SuccessfulCreate

Created pod: ovn-operator-controller-manager-5955d8c787-55b7d

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-5955d8c787 to 1

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-6bd4687957 to 1

openstack-operators

replicaset-controller

neutron-operator-controller-manager-6bd4687957

SuccessfulCreate

Created pod: neutron-operator-controller-manager-6bd4687957-lwlws

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-rsh8v"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-5955d8c787 to 1

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-vault

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

watcher-operator-controller-manager-bccc79885

SuccessfulCreate

Created pod: watcher-operator-controller-manager-bccc79885-4pjvq

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-bccc79885 to 1

openstack-operators

replicaset-controller

swift-operator-controller-manager-68f46476f

SuccessfulCreate

Created pod: swift-operator-controller-manager-68f46476f-pztlf

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-68f46476f to 1

openstack-operators

cert-manager-certificates-request-manager

infra-operator-metrics-certs

Requested

Created new CertificateRequest resource "infra-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-74n8j"

openstack-operators

replicaset-controller

ironic-operator-controller-manager-554564d7fc

SuccessfulCreate

Created pod: ironic-operator-controller-manager-554564d7fc-hksp2

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-6bd4687957 to 1

openstack-operators

replicaset-controller

neutron-operator-controller-manager-6bd4687957

SuccessfulCreate

Created pod: neutron-operator-controller-manager-6bd4687957-lwlws

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-74n8j"

openstack-operators

cert-manager-certificaterequests-issuer-vault

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-589c568786

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-589c568786-kwb4z

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-589c568786 to 1

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

ironic-operator-metrics-certs

Requested

Created new CertificateRequest resource "ironic-operator-metrics-certs-1"

openstack-operators

replicaset-controller

placement-operator-controller-manager-8497b45c89

SuccessfulCreate

Created pod: placement-operator-controller-manager-8497b45c89-nn47h

openstack-operators

cert-manager-certificaterequests-issuer-acme

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

keystone-operator-controller-manager-b4d948c87

SuccessfulCreate

Created pod: keystone-operator-controller-manager-b4d948c87-ws6cb

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1

openstack-operators

cert-manager-certificaterequests-issuer-ca

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

test-operator-controller-manager-5dc6794d5b

SuccessfulCreate

Created pod: test-operator-controller-manager-5dc6794d5b-4djnj

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-5dc6794d5b to 1

openstack-operators

replicaset-controller

manila-operator-controller-manager-67d996989d

SuccessfulCreate

Created pod: manila-operator-controller-manager-67d996989d-psxsg

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-67d996989d to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-5dc486cffc to 1

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-6994f66f48

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-6994f66f48-5xt4j

openstack-operators | replicaset-controller | mariadb-operator-controller-manager-6994f66f48 | SuccessfulCreate | Created pod: mariadb-operator-controller-manager-6994f66f48-5xt4j
openstack-operators | deployment-controller | mariadb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set mariadb-operator-controller-manager-6994f66f48 to 1
openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-589c568786 to 1
openstack-operators | replicaset-controller | openstack-operator-controller-manager-5dc486cffc | SuccessfulCreate | Created pod: openstack-operator-controller-manager-5dc486cffc-q59hq
openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-bccc79885 to 1
openstack-operators | replicaset-controller | watcher-operator-controller-manager-bccc79885 | SuccessfulCreate | Created pod: watcher-operator-controller-manager-bccc79885-4pjvq
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-2ldv2 | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc"
openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | glance-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-dzbvc | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642"
openstack-operators | multus | designate-operator-controller-manager-6d8bf5c495-dzbvc | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes
openstack-operators | kubelet | cinder-operator-controller-manager-55d77d7b5c-b72xt | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3"
openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | multus | barbican-operator-controller-manager-868647ff47-2ldv2 | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-key-manager | nova-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "nova-operator-metrics-certs-ftzwl"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | multus | glance-operator-controller-manager-784b5bb6c5-zfd69 | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zfd69 | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | multus | cinder-operator-controller-manager-55d77d7b5c-b72xt | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes
openstack-operators | multus | telemetry-operator-controller-manager-589c568786-kwb4z | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes
openstack-operators | kubelet | manila-operator-controller-manager-67d996989d-psxsg | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26"
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-nffrm | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838"
openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | multus | nova-operator-controller-manager-567668f5cf-nffrm | AddedInterface | Add eth0 [10.128.0.154/23] from ovn-kubernetes
openstack-operators | multus | test-operator-controller-manager-5dc6794d5b-4djnj | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes
openstack-operators | multus | swift-operator-controller-manager-68f46476f-pztlf | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | multus | manila-operator-controller-manager-67d996989d-psxsg | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-trigger | telemetry-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-ws6cb | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1"
openstack-operators | multus | keystone-operator-controller-manager-b4d948c87-ws6cb | AddedInterface | Add eth0 [10.128.0.150/23] from ovn-kubernetes
openstack-operators | multus | mariadb-operator-controller-manager-6994f66f48-5xt4j | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes
openstack-operators | multus | ovn-operator-controller-manager-5955d8c787-55b7d | AddedInterface | Add eth0 [10.128.0.157/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-trigger | swift-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-5xt4j | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a"
openstack-operators | multus | ironic-operator-controller-manager-554564d7fc-hksp2 | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes
openstack-operators | multus | horizon-operator-controller-manager-5b9b8895d5-49gvb | AddedInterface | Add eth0 [10.128.0.147/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | kubelet | octavia-operator-controller-manager-659dc6bbfc-74cdr | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06"
openstack-operators | multus | neutron-operator-controller-manager-6bd4687957-lwlws | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes
openstack-operators | kubelet | neutron-operator-controller-manager-6bd4687957-lwlws | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf"
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-hksp2 | Pulling | Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867"
openstack-operators | multus | octavia-operator-controller-manager-659dc6bbfc-74cdr | AddedInterface | Add eth0 [10.128.0.155/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-9cs5h"
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-5t6bt | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | multus | heat-operator-controller-manager-69f49c598c-5t6bt | AddedInterface | Add eth0 [10.128.0.146/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1"
openstack-operators | multus | placement-operator-controller-manager-8497b45c89-nn47h | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | multus | watcher-operator-controller-manager-bccc79885-4pjvq | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | multus | rabbitmq-cluster-operator-manager-668c99d594-qxzpw | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-49gvb | Pulling | Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da"
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-pq2p6"
openstack-operators | kubelet | watcher-operator-controller-manager-bccc79885-4pjvq | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97"
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-55b7d | Failed | Failed to pull image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192": pull QPS exceeded
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-55b7d | Failed | Error: ErrImagePull
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-qxzpw | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-2cfkz"
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-kwb4z | Failed | Failed to pull image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc": pull QPS exceeded
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-nn47h | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd"
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-kwb4z | Failed | Error: ErrImagePull
openstack-operators | kubelet | test-operator-controller-manager-5dc6794d5b-4djnj | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98"
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-pztlf | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04"
openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-55b7d | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192" (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-kwb4z | Failed | Error: ImagePullBackOff (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-589c568786-kwb4z | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc"
openstack-operators | cert-manager-certificates-issuing | designate-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-c7299"
openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack-operators | kubelet | ovn-operator-controller-manager-5955d8c787-55b7d | Failed | Error: ImagePullBackOff

openstack-operators

cert-manager-certificaterequests-issuer-acme

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-5955d8c787-55b7d

Failed

Error: ImagePullBackOff

openstack-operators

cert-manager-certificates-key-manager

swift-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "swift-operator-metrics-certs-88wmv"

openstack-operators

cert-manager-certificates-request-manager

nova-operator-metrics-certs

Requested

Created new CertificateRequest resource "nova-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

neutron-operator-metrics-certs

Requested

Created new CertificateRequest resource "neutron-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

swift-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "swift-operator-metrics-certs-88wmv"

openstack-operators

cert-manager-certificaterequests-issuer-vault

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

infra-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-kwb4z

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc"
(x2)

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-kwb4z

Failed

Error: ImagePullBackOff

openstack-operators

cert-manager-certificates-key-manager

placement-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "placement-operator-metrics-certs-c7299"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-venafi

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

octavia-operator-metrics-certs

Requested

Created new CertificateRequest resource "octavia-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

octavia-operator-metrics-certs

Requested

Created new CertificateRequest resource "octavia-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

horizon-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

horizon-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-trigger

openstack-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

openstack-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

mariadb-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

mariadb-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-lbhg8"

openstack-operators

cert-manager-certificates-key-manager

telemetry-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-dpkfl"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

nova-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

telemetry-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-dpkfl"

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

nova-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-lbhg8"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-nw9j4"

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

placement-operator-metrics-certs

Requested

Created new CertificateRequest resource "placement-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-nw9j4"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

placement-operator-metrics-certs

Requested

Created new CertificateRequest resource "placement-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

octavia-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-venafi

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

octavia-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

ovn-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

ovn-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

infra-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

keystone-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

infra-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-issuing

keystone-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

manila-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

manila-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-nz9lg"

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

ironic-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

placement-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-acme

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-nz9lg"

Namespace | Component | RelatedObject | Reason | Message
openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-cvgwr"
openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-f9gjz"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-4cwwk"
openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | placement-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
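The records above are a flattened rendering of the event table (Namespace, Component, RelatedObject, Reason, Message, one field per line, blank-line separated). A minimal Python sketch, assuming exactly that layout, that reconstructs the rows and tallies Reason values so the issuance flow (Requested, WaitingForApproval, approval, CertificateIssued, Issuing) can be followed per object:

```python
import re
from collections import Counter

def parse_events(text):
    """Rebuild 5-tuples (namespace, component, object, reason, message)
    from the flattened dump. Grouping every five non-blank lines is an
    assumption about this particular extraction, not a Kubernetes format."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    merged = []
    for l in lines:
        # Fold trailing "(xN)" repeat counts into the preceding message.
        if re.fullmatch(r"\(x\d+\)", l) and merged:
            merged[-1] += " " + l
        else:
            merged.append(l)
    return [tuple(merged[i:i + 5]) for i in range(0, len(merged) - 4, 5)]

sample = """
openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-venafi

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
"""

events = parse_events(sample)
reasons = Counter(e[3] for e in events)
```

The same parser can be pointed at the full dump to count, for example, how many CertificateRequests were still WaitingForApproval versus already CertificateIssued.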

Namespace | Component | RelatedObject | Reason | Message
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-2kk8t | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found (x6)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b9tqsfz | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found (x6)
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-q59hq | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found (x6)
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-q59hq | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found
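The FailedMount events here are transient: each operator pod mounts a webhook or metrics certificate Secret that cert-manager has not yet written, and the mount retries succeed once the corresponding Certificate reaches Issuing. A small sketch, assuming the message format shown above, that extracts the (volume, secret) pairs so they can be cross-checked against the issued certificates:

```python
import re

# Mirrors the kubelet message text seen in these events; this regex is a
# log-scraping convenience, not part of any Kubernetes API.
FAILED_MOUNT_RE = re.compile(
    r'MountVolume\.SetUp failed for volume "(?P<volume>[^"]+)" : '
    r'secret "(?P<secret>[^"]+)" not found'
)

def missing_secrets(messages):
    """Return (volume, secret) pairs from kubelet FailedMount messages."""
    out = []
    for msg in messages:
        m = FAILED_MOUNT_RE.search(msg)
        if m:
            out.append((m.group("volume"), m.group("secret")))
    return out

msgs = [
    'MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found',
    'MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found',
    'The certificate has been successfully issued',
]
pairs = missing_secrets(msgs)
```

A secret name that keeps appearing here long after the matching certificate reports Issuing would indicate a real problem rather than ordinary startup ordering.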

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

openstack-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

openstack-operator-serving-cert

Issuing

The certificate has been successfully issued
(x2)

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-kwb4z

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc"
(x2)

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-kwb4z

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc"

openstack-operators

cert-manager-certificates-issuing

openstack-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

openstack-operator-metrics-certs

Issuing

The certificate has been successfully issued
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-5955d8c787-55b7d

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192"
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-5955d8c787-55b7d

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192"

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-b72xt

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" in 20.313s (20.313s including waiting). Image size: 191425982 bytes.

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-b72xt

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" in 20.313s (20.313s including waiting). Image size: 191425982 bytes.

openstack-operators

kubelet

glance-operator-controller-manager-784b5bb6c5-zfd69

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be" in 21.152s (21.152s including waiting). Image size: 191991232 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-5xt4j

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 19.864s (19.864s including waiting). Image size: 189413585 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-6bd4687957-lwlws

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" in 20.489s (20.489s including waiting). Image size: 191026634 bytes.

openstack-operators

kubelet

horizon-operator-controller-manager-5b9b8895d5-49gvb

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 20.517s (20.517s including waiting). Image size: 190376908 bytes.

openstack-operators

kubelet

designate-operator-controller-manager-6d8bf5c495-dzbvc

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 20.668s (20.668s including waiting). Image size: 195315176 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-4pjvq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" in 19.74s (19.74s including waiting). Image size: 190936524 bytes.

openstack-operators

kubelet

designate-operator-controller-manager-6d8bf5c495-dzbvc

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 20.668s (20.668s including waiting). Image size: 195315176 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-4pjvq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" in 19.74s (19.74s including waiting). Image size: 190936524 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-8497b45c89-nn47h

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 19.74s (19.74s including waiting). Image size: 190626789 bytes.

openstack-operators

kubelet

glance-operator-controller-manager-784b5bb6c5-zfd69

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be" in 21.152s (21.152s including waiting). Image size: 191991232 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-6bd4687957-lwlws

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" in 20.489s (20.489s including waiting). Image size: 191026634 bytes.

openstack-operators

kubelet

ironic-operator-controller-manager-554564d7fc-hksp2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 19.86s (19.86s including waiting). Image size: 191665087 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-5xt4j

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 19.864s (19.864s including waiting). Image size: 189413585 bytes.

openstack-operators

kubelet

barbican-operator-controller-manager-868647ff47-2ldv2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 21.855s (21.855s including waiting). Image size: 191103449 bytes.

openstack-operators

kubelet

heat-operator-controller-manager-69f49c598c-5t6bt

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 21.081s (21.081s including waiting). Image size: 191605671 bytes.

openstack-operators

kubelet

designate-operator-controller-manager-6d8bf5c495-dzbvc

Started

Started container manager

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-pztlf

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 21.017s (21.017s including waiting). Image size: 192091569 bytes.

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-b72xt

Started

Started container manager

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-b72xt

Created

Created container: manager

openstack-operators

designate-operator-controller-manager-6d8bf5c495-dzbvc_5b89c5c5-bcbe-48bf-a6d6-d899c35aed6d

f9497e05.openstack.org

LeaderElection

designate-operator-controller-manager-6d8bf5c495-dzbvc_5b89c5c5-bcbe-48bf-a6d6-d899c35aed6d became leader

openstack-operators

kubelet

designate-operator-controller-manager-6d8bf5c495-dzbvc

Created

Created container: manager

openstack-operators

kubelet

ovn-operator-controller-manager-5955d8c787-55b7d

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192" in 4.023s (4.023s including waiting). Image size: 190114714 bytes.

openstack-operators

mariadb-operator-controller-manager-6994f66f48-5xt4j_5b20471d-b745-4479-8b93-064b87855724

7c2a6c6b.openstack.org

LeaderElection

mariadb-operator-controller-manager-6994f66f48-5xt4j_5b20471d-b745-4479-8b93-064b87855724 became leader

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-4pjvq

Started

Started container manager

openstack-operators

kubelet

placement-operator-controller-manager-8497b45c89-nn47h

Created

Created container: manager

openstack-operators

kubelet

placement-operator-controller-manager-8497b45c89-nn47h

Started

Started container manager

openstack-operators

kubelet

test-operator-controller-manager-5dc6794d5b-4djnj

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98" in 21.114s (21.114s including waiting). Image size: 188905403 bytes.

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-qxzpw

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 21.041s (21.041s including waiting). Image size: 176351298 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-659dc6bbfc-74cdr

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" in 21.825s (21.825s including waiting). Image size: 193556939 bytes.

openstack-operators

kubelet

nova-operator-controller-manager-567668f5cf-nffrm

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 21.697s (21.697s including waiting). Image size: 193562469 bytes.

openstack-operators

kubelet

ironic-operator-controller-manager-554564d7fc-hksp2

Created

Created container: manager

openstack-operators

kubelet

neutron-operator-controller-manager-6bd4687957-lwlws

Created

Created container: manager

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-kwb4z

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc" in 5.095s (5.095s including waiting). Image size: 196099046 bytes.

openstack-operators

kubelet

ironic-operator-controller-manager-554564d7fc-hksp2

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-5xt4j

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-5xt4j

Created

Created container: manager

openstack-operators

kubelet

horizon-operator-controller-manager-5b9b8895d5-49gvb

Created

Created container: manager

openstack-operators

kubelet

horizon-operator-controller-manager-5b9b8895d5-49gvb

Started

Started container manager

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-psxsg

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" in 21.766s (21.766s including waiting). Image size: 191246784 bytes.

openstack-operators

kubelet

keystone-operator-controller-manager-b4d948c87-ws6cb

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 21.831s (21.831s including waiting). Image size: 193023123 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-4pjvq

Created

Created container: manager

openstack-operators

glance-operator-controller-manager-784b5bb6c5-zfd69_83334093-955c-45b6-8dac-94a8e2be78b1

c569355b.openstack.org

LeaderElection

glance-operator-controller-manager-784b5bb6c5-zfd69_83334093-955c-45b6-8dac-94a8e2be78b1 became leader

openstack-operators

kubelet

octavia-operator-controller-manager-659dc6bbfc-74cdr

Started

Started container manager

openstack-operators

kubelet

heat-operator-controller-manager-69f49c598c-5t6bt

Started

Started container manager

openstack-operators

kubelet

heat-operator-controller-manager-69f49c598c-5t6bt

Created

Created container: manager

openstack-operators

kubelet

glance-operator-controller-manager-784b5bb6c5-zfd69

Started

Started container manager

openstack-operators

kubelet

glance-operator-controller-manager-784b5bb6c5-zfd69

Created

Created container: manager

openstack-operators

telemetry-operator-controller-manager-589c568786-kwb4z_bbeae5ca-640f-4b60-8ecd-3b9caebb9ad3

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-589c568786-kwb4z_bbeae5ca-640f-4b60-8ecd-3b9caebb9ad3 became leader

openstack-operators

ironic-operator-controller-manager-554564d7fc-hksp2_7c48be35-b2d2-41b9-be2b-cc43f071a78a

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-554564d7fc-hksp2_7c48be35-b2d2-41b9-be2b-cc43f071a78a became leader

openstack-operators

nova-operator-controller-manager-567668f5cf-nffrm_783a0b81-6d8d-4300-b6e1-3b2630cf4b8e

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-567668f5cf-nffrm_783a0b81-6d8d-4300-b6e1-3b2630cf4b8e became leader

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-psxsg

Started

Started container manager

openstack-operators

watcher-operator-controller-manager-bccc79885-4pjvq_1192bf8b-eb51-4ef8-b869-2cc603e203a4

5049980f.openstack.org

LeaderElection

watcher-operator-controller-manager-bccc79885-4pjvq_1192bf8b-eb51-4ef8-b869-2cc603e203a4 became leader

openstack-operators

horizon-operator-controller-manager-5b9b8895d5-49gvb_b326cf49-1a7c-4d70-a2b4-e37add83e941

5ad2eba0.openstack.org

LeaderElection

horizon-operator-controller-manager-5b9b8895d5-49gvb_b326cf49-1a7c-4d70-a2b4-e37add83e941 became leader

openstack-operators

keystone-operator-controller-manager-b4d948c87-ws6cb_0a574903-6f07-4188-ae72-ff24335c5f9c

6012128b.openstack.org

LeaderElection

keystone-operator-controller-manager-b4d948c87-ws6cb_0a574903-6f07-4188-ae72-ff24335c5f9c became leader

openstack-operators

test-operator-controller-manager-5dc6794d5b-4djnj_6f1e7307-3c58-4241-9c03-87c693d336c8

6cce095b.openstack.org

LeaderElection

test-operator-controller-manager-5dc6794d5b-4djnj_6f1e7307-3c58-4241-9c03-87c693d336c8 became leader

openstack-operators

placement-operator-controller-manager-8497b45c89-nn47h_dda2c8f4-b520-45ed-9afb-b65ccc3a8162

73d6b7ce.openstack.org

LeaderElection

placement-operator-controller-manager-8497b45c89-nn47h_dda2c8f4-b520-45ed-9afb-b65ccc3a8162 became leader

openstack-operators

swift-operator-controller-manager-68f46476f-pztlf_16950fe9-5d04-4bf6-a174-e4b7cd1207ea

83821f12.openstack.org

LeaderElection

swift-operator-controller-manager-68f46476f-pztlf_16950fe9-5d04-4bf6-a174-e4b7cd1207ea became leader

openstack-operators

manila-operator-controller-manager-67d996989d-psxsg_89c957e7-4a5c-40d1-b666-58cfdc29c3f2

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-67d996989d-psxsg_89c957e7-4a5c-40d1-b666-58cfdc29c3f2 became leader

openstack-operators

barbican-operator-controller-manager-868647ff47-2ldv2_d0acac7b-e36d-461b-9d43-518f8ee9e554

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-868647ff47-2ldv2_d0acac7b-e36d-461b-9d43-518f8ee9e554 became leader

openstack-operators

ovn-operator-controller-manager-5955d8c787-55b7d_3304802f-22c1-4724-aa52-f1703946c172

90840a60.openstack.org

LeaderElection

ovn-operator-controller-manager-5955d8c787-55b7d_3304802f-22c1-4724-aa52-f1703946c172 became leader

openstack-operators

neutron-operator-controller-manager-6bd4687957-lwlws_6576fd8c-661a-4cb0-961b-c3f348763f4f

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-6bd4687957-lwlws_6576fd8c-661a-4cb0-961b-c3f348763f4f became leader

openstack-operators

kubelet

test-operator-controller-manager-5dc6794d5b-4djnj

Started

Started container manager

openstack-operators

kubelet

test-operator-controller-manager-5dc6794d5b-4djnj

Created

Created container: manager

openstack-operators

octavia-operator-controller-manager-659dc6bbfc-74cdr_36d6a3d7-73a7-40e9-a7b9-4f6b290c6175

98809e87.openstack.org

LeaderElection

octavia-operator-controller-manager-659dc6bbfc-74cdr_36d6a3d7-73a7-40e9-a7b9-4f6b290c6175 became leader

openstack-operators

cinder-operator-controller-manager-55d77d7b5c-b72xt_7c09ab1b-76f4-49ad-98aa-b850ca0ef0b6

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-55d77d7b5c-b72xt_7c09ab1b-76f4-49ad-98aa-b850ca0ef0b6 became leader

openstack-operators

kubelet

barbican-operator-controller-manager-868647ff47-2ldv2

Created

Created container: manager

openstack-operators

kubelet

barbican-operator-controller-manager-868647ff47-2ldv2

Started

Started container manager

openstack-operators

heat-operator-controller-manager-69f49c598c-5t6bt_954cbe13-00ef-4fd3-80da-8e7cdf05a4d3

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-69f49c598c-5t6bt_954cbe13-00ef-4fd3-80da-8e7cdf05a4d3 became leader

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-psxsg

Created

Created container: manager

openstack-operators

kubelet

keystone-operator-controller-manager-b4d948c87-ws6cb

Started

Started container manager

openstack-operators

kubelet

keystone-operator-controller-manager-b4d948c87-ws6cb

Created

Created container: manager

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-kwb4z

Started

Started container manager

openstack-operators

kubelet

telemetry-operator-controller-manager-589c568786-kwb4z

Created

Created container: manager

openstack-operators

kubelet

neutron-operator-controller-manager-6bd4687957-lwlws

Started

Started container manager

openstack-operators

kubelet

nova-operator-controller-manager-567668f5cf-nffrm

Created

Created container: manager

openstack-operators

kubelet

nova-operator-controller-manager-567668f5cf-nffrm

Started

Started container manager

openstack-operators

kubelet

octavia-operator-controller-manager-659dc6bbfc-74cdr

Created

Created container: manager

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-pztlf

Started

Started container manager

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-pztlf

Created

Created container: manager

openstack-operators

kubelet

ovn-operator-controller-manager-5955d8c787-55b7d

Created

Created container: manager

openstack-operators

kubelet

ovn-operator-controller-manager-5955d8c787-55b7d

Started

Started container manager

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-qxzpw

Started

Started container operator

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-qxzpw

Created

Created container: operator

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-qxzpw_913ce9e4-862a-4a2b-92da-f4957a7ecf19

rabbitmq-cluster-operator-leader-election

LeaderElection

rabbitmq-cluster-operator-manager-668c99d594-qxzpw_913ce9e4-862a-4a2b-92da-f4957a7ecf19 became leader

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-2kk8t

Pulling

Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a"

openstack-operators

multus

infra-operator-controller-manager-5f879c76b6-2kk8t

AddedInterface

Add eth0 [10.128.0.148/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"

openstack-operators

multus

openstack-operator-controller-manager-5dc486cffc-q59hq

AddedInterface

Add eth0 [10.128.0.163/23] from ovn-kubernetes

openstack-operators

multus

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

AddedInterface

Add eth0 [10.128.0.156/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-manager-5dc486cffc-q59hq

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:d180b5337633950545869f0838a038c3368d0907b85a2b984f70d5df9990d785" already present on machine

openstack-operators

kubelet

openstack-operator-controller-manager-5dc486cffc-q59hq

Created

Created container: manager

openstack-operators

kubelet

openstack-operator-controller-manager-5dc486cffc-q59hq

Started

Started container manager

openstack-operators

openstack-operator-controller-manager-5dc486cffc-q59hq_58dfd31c-fe6f-4cf9-8a3d-344b6f0db85a

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-5dc486cffc-q59hq_58dfd31c-fe6f-4cf9-8a3d-344b6f0db85a became leader

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-2kk8t

Created

Created container: manager

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-2kk8t

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Created

Created container: manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Started

Started container manager

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-2kk8t

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 3.774s (3.774s including waiting). Image size: 192826291 bytes.

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-2kk8t

Created

Created container: manager

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-2kk8t

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Created

Created container: manager

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-2kk8t

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 3.774s (3.774s including waiting). Image size: 192826291 bytes.

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 3.144s (3.144s including waiting). Image size: 190527593 bytes.

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 3.144s (3.144s including waiting). Image size: 190527593 bytes.

openstack-operators

infra-operator-controller-manager-5f879c76b6-2kk8t_be40b57b-9f9d-4eb9-a6a5-330edf712eb2

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-5f879c76b6-2kk8t_be40b57b-9f9d-4eb9-a6a5-330edf712eb2 became leader

openstack-operators

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz_f78a579b-567c-486a-b637-5d6870998a22

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz_f78a579b-567c-486a-b637-5d6870998a22 became leader

openstack-operators

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz_f78a579b-567c-486a-b637-5d6870998a22

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-579b7786b9tqsfz_f78a579b-567c-486a-b637-5d6870998a22 became leader

openstack-operators

infra-operator-controller-manager-5f879c76b6-2kk8t_be40b57b-9f9d-4eb9-a6a5-330edf712eb2

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-5f879c76b6-2kk8t_be40b57b-9f9d-4eb9-a6a5-330edf712eb2 became leader
(x2)

openstack | cert-manager-issuers | rootca-internal | ErrInitIssuer | Error initializing issuer: secrets "rootca-internal" not found (x2)
openstack | cert-manager-issuers | rootca-internal | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-internal" not found
openstack | cert-manager-certificates-request-manager | rootca-public | Requested | Created new CertificateRequest resource "rootca-public-1"
openstack | cert-manager-certificates-key-manager | rootca-public | Generated | Stored new private key in temporary Secret resource "rootca-public-d2smn"
openstack | cert-manager-certificates-trigger | rootca-public | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | cert-manager-issuers | rootca-public | ErrInitIssuer | Error initializing issuer: secrets "rootca-public" not found (x2)
openstack | cert-manager-issuers | rootca-public | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-public" not found
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rootca-public-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-issuing | rootca-public | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | rootca-internal | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | rootca-internal-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | rootca-libvirt | Generated | Stored new private key in temporary Secret resource "rootca-libvirt-lmw6m"
openstack | cert-manager-certificates-request-manager | rootca-libvirt | Requested | Created new CertificateRequest resource "rootca-libvirt-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | rootca-internal | Generated | Stored new private key in temporary Secret resource "rootca-internal-d4vvb"
openstack | cert-manager-certificates-request-manager | rootca-internal | Requested | Created new CertificateRequest resource "rootca-internal-1"
openstack | cert-manager-certificates-issuing | rootca-internal | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | rootca-libvirt | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-vault | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | rootca-libvirt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | CertificateIssued | Certificate fetched from issuer successfully (x2)
openstack | cert-manager-issuers | rootca-libvirt | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-libvirt" not found (x2)
openstack | cert-manager-issuers | rootca-libvirt | ErrInitIssuer | Error initializing issuer: secrets "rootca-libvirt" not found
openstack | cert-manager-certificaterequests-issuer-acme | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | cert-manager-issuers | rootca-ovn | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-ovn" not found
openstack | cert-manager-certificates-issuing | rootca-libvirt | Issuing | The certificate has been successfully issued (x2)
openstack | cert-manager-issuers | rootca-ovn | ErrInitIssuer | Error initializing issuer: secrets "rootca-ovn" not found
openstack | cert-manager-certificates-trigger | rootca-ovn | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | rootca-ovn | Generated | Stored new private key in temporary Secret resource "rootca-ovn-mc9jk" (x2)

openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | dnsmasq-dns | IPAllocated | Assigned IP ["192.168.122.80"] (x2)
openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-bc7f9869 to 1
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7d4c486879 to 1 (x3)
openstack | cert-manager-issuers | rootca-public | KeyPairVerified | Signing CA verified
openstack | cert-manager-certificates-issuing | rootca-ovn | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | rootca-ovn | Requested | Created new CertificateRequest resource "rootca-ovn-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rootca-ovn-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-vault | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | replicaset-controller | dnsmasq-dns-bc7f9869 | SuccessfulCreate | Created pod: dnsmasq-dns-bc7f9869-4kmll (x3)
openstack | cert-manager-issuers | rootca-internal | KeyPairVerified | Signing CA verified
openstack | cert-manager-certificates-request-manager | rabbitmq-svc | Requested | Created new CertificateRequest resource "rabbitmq-svc-1"
openstack | cert-manager-certificates-key-manager | rabbitmq-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-svc-zv6jt"
openstack | cert-manager-certificates-trigger | rabbitmq-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rabbitmq-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | rabbitmq-cell1-svc | Issuing | Issuing certificate as Secret does not exist
openstack | replicaset-controller | dnsmasq-dns-7d4c486879 | SuccessfulCreate | Created pod: dnsmasq-dns-7d4c486879-cr468
openstack | multus | dnsmasq-dns-bc7f9869-4kmll | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-7d4c486879-cr468 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"
openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-nodes of Type *v1.Service
openstack | cert-manager-certificates-issuing | rabbitmq-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | rabbitmq-cell1-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-lgrmz"
openstack | kubelet | dnsmasq-dns-bc7f9869-4kmll | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"
openstack | cert-manager-certificates-request-manager | rabbitmq-cell1-svc | Requested | Created new CertificateRequest resource "rabbitmq-cell1-svc-1"
openstack | multus | dnsmasq-dns-7d4c486879-cr468 | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-plugins-conf of Type *v1.ConfigMap (x3)

openstack | cert-manager-issuers | rootca-libvirt | KeyPairVerified | Signing CA verified
openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049 | persistence-rabbitmq-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0"
openstack | replicaset-controller | dnsmasq-dns-6974cff98c | SuccessfulCreate | Created pod: dnsmasq-dns-6974cff98c-qbhgh
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | (combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-7d4c486879 to 0 from 1
default | endpoint-controller | rabbitmq | FailedToCreateEndpoint | Failed to create endpoint for service openstack/rabbitmq: endpoints "rabbitmq" already exists
openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success
openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.RoleBinding
openstack | replicaset-controller | dnsmasq-dns-7d4c486879 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7d4c486879-cr468
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-peer-discovery of Type *v1.Role
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.ServiceAccount
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server-conf of Type *v1.ConfigMap
openstack | cert-manager-certificates-issuing | rabbitmq-cell1-svc | Issuing | The certificate has been successfully issued
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-default-user of Type *v1.Secret
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-erlang-cookie of Type *v1.Secret (x2)
openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rabbitmq-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io (x2)
openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-6974cff98c to 1 from 0
openstack | metallb-controller | rabbitmq | IPAllocated | Assigned IP ["172.17.0.85"]
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq of Type *v1.Service
openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-bc7f9869 to 0 from 1
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7c45d57b9c to 1 from 0
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.ServiceAccount
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-peer-discovery of Type *v1.Role
openstack | replicaset-controller | dnsmasq-dns-bc7f9869 | SuccessfulDelete | Deleted pod: dnsmasq-dns-bc7f9869-4kmll
openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | galera-openstack-svc | Requested | Created new CertificateRequest resource "galera-openstack-svc-1"
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-default-user of Type *v1.Secret
openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | galera-openstack-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | galera-openstack-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret (x2)
openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x3)

openstack | cert-manager-issuers | rootca-ovn | KeyPairVerified | Signing CA verified
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-nodes of Type *v1.Service
openstack | replicaset-controller | dnsmasq-dns-7c45d57b9c | SuccessfulCreate | Created pod: dnsmasq-dns-7c45d57b9c-jf69p
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1 of Type *v1.Service
openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | (combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap
openstack | cert-manager-certificates-key-manager | galera-openstack-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-svc-plbqm"
openstack | cert-manager-certificates-trigger | galera-openstack-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful
openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.RoleBinding
openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | multus | dnsmasq-dns-6974cff98c-qbhgh | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes
openstack | metallb-controller | rabbitmq-cell1 | IPAllocated | Assigned IP ["172.17.0.86"] (x2)
openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2)
openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | kubelet | dnsmasq-dns-6974cff98c-qbhgh | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"
openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success
openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | cert-manager-certificates-trigger | galera-openstack-cell1-svc | Issuing | Issuing certificate as Secret does not exist
openstack | multus | dnsmasq-dns-7c45d57b9c-jf69p | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-7c45d57b9c-jf69p | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd"
openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Pod openstack-galera-0 in StatefulSet openstack-galera successful (x2)
openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificates-trigger | memcached-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049 | persistence-rabbitmq-cell1-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0"
openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful
openstack | topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049 | persistence-rabbitmq-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-73225840-35e7-4008-8fed-f5170a782266
openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | galera-openstack-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success
openstack | cert-manager-certificates-key-manager | galera-openstack-cell1-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-slnnk"
openstack | cert-manager-certificates-issuing | galera-openstack-cell1-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | galera-openstack-cell1-svc | Requested | Created new CertificateRequest resource "galera-openstack-cell1-svc-1"
openstack | cert-manager-certificates-key-manager | memcached-svc | Generated | Stored new private key in temporary Secret resource "memcached-svc-8sz4g"
openstack | cert-manager-certificaterequests-issuer-acme | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049 | persistence-rabbitmq-cell1-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-436e1541-7987-4390-8405-eaf459b61a91
openstack | cert-manager-certificates-issuing | memcached-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049 | mysql-db-openstack-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0"
openstack | cert-manager-certificates-request-manager | memcached-svc | Requested | Created new CertificateRequest resource "memcached-svc-1"
openstack | cert-manager-certificaterequests-approver | memcached-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-vault | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved

openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | Unhealthy | Readiness probe failed: Get "https://10.128.0.102:10250/livez": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | ProbeError | Readiness probe error: Get "https://10.128.0.102:10250/livez": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:
openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | Unhealthy | Readiness probe failed: Get "https://10.128.0.102:10250/livez": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | ProbeError | Liveness probe error: Get "https://10.128.0.102:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
openstack | statefulset-controller | memcached | SuccessfulCreate | create Pod memcached-0 in StatefulSet memcached successful
openstack | cert-manager-certificates-trigger | ovn-metrics | Issuing | Issuing certificate as Secret does not exist
openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | Unhealthy | Liveness probe failed: Get "https://10.128.0.102:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | ProbeError | Readiness probe error: Get "https://10.128.0.102:10250/livez": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:
openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | Unhealthy | Liveness probe failed: Get "https://10.128.0.102:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openshift-monitoring | kubelet | metrics-server-67ddc7b799-zlnvf | ProbeError | Liveness probe error: Get "https://10.128.0.102:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
openstack | kubelet | memcached-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:d6c93f70d8b142180af00baccabe84529baba1bb1e8bfd9bc2b58efb09aef590"
openstack | cert-manager-certificates-issuing | ovn-metrics | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | ovn-metrics-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | memcached-0 | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes
openstack | cert-manager-certificates-key-manager | ovn-metrics | Generated | Stored new private key in temporary Secret resource "ovn-metrics-9pwxz"
openstack | cert-manager-certificaterequests-issuer-venafi | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | ovn-metrics | Requested | Created new CertificateRequest resource "ovn-metrics-1"
openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | rabbitmq-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20"

openstack

multus

rabbitmq-server-0

AddedInterface

Add eth0 [10.128.0.169/23] from ovn-kubernetes

openstack

multus

rabbitmq-cell1-server-0

AddedInterface

Add eth0 [10.128.0.170/23] from ovn-kubernetes

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

mysql-db-openstack-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-e505a6ea-7c17-4298-9b54-895fbaced559

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

mysql-db-openstack-cell1-galera-0

Provisioning

External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0"

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

mysql-db-openstack-cell1-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-55ed44a5-d0eb-48d0-a59a-001d0b7a79dc

openstack

kubelet

rabbitmq-cell1-server-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20"

openstack

cert-manager-certificaterequests-issuer-venafi

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

ovnnorthd-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-trigger

neutron-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-trigger

ovncontroller-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

ovncontroller-ovndbs

Generated

Stored new private key in temporary Secret resource "ovncontroller-ovndbs-dr74v"

openstack

cert-manager-certificates-request-manager

ovncontroller-ovndbs

Requested

Created new CertificateRequest resource "ovncontroller-ovndbs-1"

openstack

cert-manager-certificates-issuing

ovndbcluster-nb-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

ovndbcluster-nb-ovndbs

Requested

Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1"

openstack

cert-manager-certificates-key-manager

ovndbcluster-nb-ovndbs

Generated

Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-jcbh7"

openstack

cert-manager-certificates-trigger

ovndbcluster-nb-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ovndbcluster-nb-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ovncontroller-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

statefulset-controller

ovsdbserver-nb

SuccessfulCreate

create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful

openstack

cert-manager-certificates-request-manager

ovnnorthd-ovndbs

Requested

Created new CertificateRequest resource "ovnnorthd-ovndbs-1"

openstack

cert-manager-certificates-key-manager

ovnnorthd-ovndbs

Generated

Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-js6qz"
(x2)

openstack

persistentvolume-controller

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

persistentvolume-controller

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

cert-manager-certificates-issuing

ovncontroller-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-key-manager

neutron-ovndbs

Generated

Stored new private key in temporary Secret resource "neutron-ovndbs-czpfx"

openstack

cert-manager-certificaterequests-issuer-vault

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

Provisioning

External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0"

openstack

cert-manager-certificaterequests-issuer-ca

ovncontroller-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-ca

ovnnorthd-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ovnnorthd-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

statefulset-controller

ovsdbserver-nb

SuccessfulCreate

create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

daemonset-controller

ovn-controller-ovs

SuccessfulCreate

Created pod: ovn-controller-ovs-lp2wm

openstack

daemonset-controller

ovn-controller

SuccessfulCreate

Created pod: ovn-controller-hjmv9

openstack

cert-manager-certificates-trigger

ovndbcluster-sb-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-request-manager

neutron-ovndbs

Requested

Created new CertificateRequest resource "neutron-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ovndbcluster-sb-ovndbs

Requested

Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-sb-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

ovndbcluster-sb-ovndbs

Generated

Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-4k2s7"

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

ProvisioningSucceeded

Successfully provisioned volume pvc-6d9616cd-d188-43df-aab6-e0353beab110

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ovndbcluster-sb-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-issuing

neutron-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-issuing

ovnnorthd-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-acme

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

ovndbcluster-sb-ovndbs

Issuing

The certificate has been successfully issued

openstack

persistentvolume-controller

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

WaitForFirstConsumer

waiting for first consumer to be created before binding
(x2)

openstack

persistentvolume-controller

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

statefulset-controller

ovsdbserver-sb

SuccessfulCreate

create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success

openstack

statefulset-controller

ovsdbserver-sb

SuccessfulCreate

create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

Provisioning

External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0"

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

ProvisioningSucceeded

Successfully provisioned volume pvc-3b312a42-b48e-4c1f-857b-7a51612d6280

openstack

kubelet

memcached-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:d6c93f70d8b142180af00baccabe84529baba1bb1e8bfd9bc2b58efb09aef590" in 21.059s (21.059s including waiting). Image size: 277861580 bytes.

openstack

kubelet

rabbitmq-server-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" in 20.406s (20.406s including waiting). Image size: 304909899 bytes.

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 29.302s (29.302s including waiting). Image size: 679396694 bytes.

openstack

kubelet

rabbitmq-cell1-server-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" in 15.779s (15.779s including waiting). Image size: 304909899 bytes.

openstack

kubelet

dnsmasq-dns-bc7f9869-4kmll

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 31.179s (31.179s including waiting). Image size: 679396694 bytes.

openstack

kubelet

dnsmasq-dns-bc7f9869-4kmll

Created

Created container: init

openstack

kubelet

dnsmasq-dns-7d4c486879-cr468

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 31.123s (31.123s including waiting). Image size: 679396694 bytes.

openstack

kubelet

memcached-0

Started

Started container memcached

openstack

multus

ovn-controller-hjmv9

AddedInterface

Add eth0 [10.128.0.174/23] from ovn-kubernetes

openstack

kubelet

ovn-controller-hjmv9

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417"

openstack

kubelet

memcached-0

Created

Created container: memcached

openstack

multus

openstack-galera-0

AddedInterface

Add eth0 [10.128.0.172/23] from ovn-kubernetes

openstack

kubelet

openstack-galera-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658"

openstack

kubelet

openstack-cell1-galera-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658"

openstack

multus

openstack-cell1-galera-0

AddedInterface

Add eth0 [10.128.0.173/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-bc7f9869-4kmll

Started

Started container init

openstack

kubelet

dnsmasq-dns-7d4c486879-cr468

Started

Started container init

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Created

Created container: init

openstack

kubelet

ovsdbserver-sb-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:2847fc8e7f911c23656f50e02d4fd6275e9edecdc19e9d04cc999c0fcc5bf917"

openstack

kubelet

dnsmasq-dns-7c45d57b9c-jf69p

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" in 28.581s (28.581s including waiting). Image size: 679396694 bytes.

openstack

multus

ovsdbserver-nb-0

AddedInterface

Add eth0 [10.128.0.176/23] from ovn-kubernetes

openstack

multus

ovsdbserver-sb-0

AddedInterface

Add internalapi [172.17.0.30/24] from openstack/internalapi

openstack

multus

ovn-controller-ovs-lp2wm

AddedInterface

Add eth0 [10.128.0.175/23] from ovn-kubernetes

openstack

multus

ovn-controller-ovs-lp2wm

AddedInterface

Add datacentre [] from openstack/datacentre

openstack

multus

ovn-controller-ovs-lp2wm

AddedInterface

Add ironic [172.20.1.30/24] from openstack/ironic

openstack

multus

ovsdbserver-sb-0

AddedInterface

Add eth0 [10.128.0.177/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7c45d57b9c-jf69p

Created

Created container: init

openstack

kubelet

dnsmasq-dns-7c45d57b9c-jf69p

Started

Started container init

openstack

kubelet

dnsmasq-dns-7d4c486879-cr468

Created

Created container: init

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Started

Started container init
(x5)

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulUpdate

updated resource rabbitmq of Type *v1.Service
(x5)

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulUpdate

updated resource rabbitmq-server of Type *v1.StatefulSet

openstack

kubelet

ovsdbserver-nb-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:05c8a64428215567969452413877b06edfb244f075c0161cf3059c3a27f8df85"

openstack

multus

ovsdbserver-nb-0

AddedInterface

Add internalapi [172.17.0.31/24] from openstack/internalapi

openstack

kubelet

rabbitmq-server-0

Started

Started container setup-container

openstack

multus

ovn-controller-ovs-lp2wm

AddedInterface

Add tenant [172.19.0.30/24] from openstack/tenant

openstack

kubelet

rabbitmq-server-0

Created

Created container: setup-container

openstack

kubelet

dnsmasq-dns-7c45d57b9c-jf69p

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

ovn-controller-ovs-lp2wm

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c"
(x5)

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulUpdate

updated resource rabbitmq-cell1-server of Type *v1.StatefulSet

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-7c45d57b9c-jf69p

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Failed

Error: container create failed: mount `/var/lib/kubelet/pods/713f4764-f8a7-4867-bd77-54c68933ca65/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory

openstack

kubelet

rabbitmq-cell1-server-0

Created

Created container: setup-container

openstack

kubelet

rabbitmq-cell1-server-0

Started

Started container setup-container
(x5)

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulUpdate

updated resource rabbitmq-cell1 of Type *v1.Service
(x2)

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

dnsmasq-dns-7c45d57b9c-jf69p

Started

Started container dnsmasq-dns

openstack

kubelet

openstack-galera-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" in 7.719s (7.719s including waiting). Image size: 429866819 bytes.

openstack

kubelet

ovn-controller-ovs-lp2wm

Started

Started container ovsdb-server-init

openstack

kubelet

ovsdbserver-sb-0

Started

Started container ovsdbserver-sb

openstack

kubelet

ovsdbserver-sb-0

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c"

openstack

kubelet

ovsdbserver-sb-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:2847fc8e7f911c23656f50e02d4fd6275e9edecdc19e9d04cc999c0fcc5bf917" in 7.532s (7.532s including waiting). Image size: 347271462 bytes.

openstack

kubelet

ovn-controller-hjmv9

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417" in 7.967s (7.967s including waiting). Image size: 347092937 bytes.

openstack

kubelet

openstack-cell1-galera-0

Started

Started container mysql-bootstrap

openstack

kubelet

openstack-galera-0

Created

Created container: mysql-bootstrap

openstack

kubelet

ovsdbserver-nb-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:05c8a64428215567969452413877b06edfb244f075c0161cf3059c3a27f8df85" in 6.827s (6.827s including waiting). Image size: 347271461 bytes.

openstack

kubelet

ovsdbserver-sb-0

Created

Created container: ovsdbserver-sb

openstack

kubelet

ovsdbserver-nb-0

Created

Created container: ovsdbserver-nb

openstack

kubelet

ovsdbserver-nb-0

Started

Started container ovsdbserver-nb

openstack

kubelet

ovsdbserver-nb-0

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c"

openstack

kubelet

ovn-controller-ovs-lp2wm

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c" in 7.237s (7.237s including waiting). Image size: 324698130 bytes.

openstack

kubelet

openstack-galera-0

Started

Started container mysql-bootstrap

openstack

kubelet

ovn-controller-ovs-lp2wm

Created

Created container: ovsdb-server-init

openstack

kubelet

openstack-cell1-galera-0

Created

Created container: mysql-bootstrap

openstack

kubelet

openstack-cell1-galera-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" in 7.966s (7.966s including waiting). Image size: 429866819 bytes.

openstack

kubelet

ovn-controller-ovs-lp2wm

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c" already present on machine

openstack

kubelet

ovn-controller-hjmv9

Started

Started container ovn-controller

openstack

kubelet

ovn-controller-hjmv9

Created

Created container: ovn-controller

openstack

kubelet

dnsmasq-dns-6974cff98c-qbhgh

Killing

Stopping container dnsmasq-dns

openstack

kubelet

ovn-controller-ovs-lp2wm

Created

Created container: ovsdb-server

openstack

replicaset-controller

dnsmasq-dns-6974cff98c

SuccessfulDelete

Deleted pod: dnsmasq-dns-6974cff98c-qbhgh

openstack

kubelet

ovsdbserver-nb-0

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 2.024s (2.024s including waiting). Image size: 165206333 bytes.

openstack

kubelet

ovn-controller-ovs-lp2wm

Started

Started container ovsdb-server

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-6974cff98c to 0 from 1

openstack

kubelet

ovn-controller-ovs-lp2wm

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:4ba5ad0d80e8531cf6b4f6f9d406c30d94ebaa95aa90709732583ed308c08d0c" already present on machine

openstack

kubelet

ovn-controller-ovs-lp2wm

Created

Created container: ovs-vswitchd

openstack

kubelet

ovsdbserver-nb-0

Created

Created container: openstack-network-exporter

openstack

kubelet

ovsdbserver-nb-0

Started

Started container openstack-network-exporter

openstack

kubelet

ovsdbserver-sb-0

Started

Started container openstack-network-exporter

openstack

kubelet

ovsdbserver-sb-0

Created

Created container: openstack-network-exporter

openstack

kubelet

ovsdbserver-sb-0

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 2.039s (2.039s including waiting). Image size: 165206333 bytes.

openstack

kubelet

ovn-controller-ovs-lp2wm

Started

Started container ovs-vswitchd

openstack

kubelet

openstack-galera-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

openstack-cell1-galera-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

openstack-cell1-galera-0

Started

Started container galera

default

endpoint-controller

ovn-controller-metrics

FailedToCreateEndpoint

Failed to create endpoint for service openstack/ovn-controller-metrics: endpoints "ovn-controller-metrics" already exists

openstack

replicaset-controller

dnsmasq-dns-679f75d775

SuccessfulCreate

Created pod: dnsmasq-dns-679f75d775-s56hh

openstack

replicaset-controller

dnsmasq-dns-679f75d775

SuccessfulDelete

Deleted pod: dnsmasq-dns-679f75d775-s56hh

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-679f75d775 to 1

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-679f75d775 to 0 from 1

openstack

daemonset-controller

ovn-controller-metrics

SuccessfulCreate

Created pod: ovn-controller-metrics-5w4cf

openstack

kubelet

openstack-cell1-galera-0

Created

Created container: galera

openstack

replicaset-controller

dnsmasq-dns-79745f7855

SuccessfulCreate

Created pod: dnsmasq-dns-79745f7855-j9vwf

openstack

kubelet

openstack-galera-0

Created

Created container: galera

openstack

kubelet

openstack-galera-0

Started

Started container galera
(x2)

openstack

metallb-controller

swift-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

kubelet

dnsmasq-dns-679f75d775-s56hh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

replicaset-controller

dnsmasq-dns-5b55dc5f67

SuccessfulCreate

Created pod: dnsmasq-dns-5b55dc5f67-k2lcw

openstack

replicaset-controller

dnsmasq-dns-79745f7855

SuccessfulDelete

Deleted pod: dnsmasq-dns-79745f7855-j9vwf

openstack

cert-manager-certificates-trigger

swift-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-679f75d775-s56hh

Started

Started container init

openstack

kubelet

dnsmasq-dns-679f75d775-s56hh

Created

Created container: init

openstack

multus

ovn-controller-metrics-5w4cf

AddedInterface

Add eth0 [10.128.0.179/23] from ovn-kubernetes
(x2)

openstack

metallb-controller

swift-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

multus

dnsmasq-dns-679f75d775-s56hh

AddedInterface

Add eth0 [10.128.0.178/23] from ovn-kubernetes

openstack

statefulset-controller

ovn-northd

SuccessfulCreate

create Pod ovn-northd-0 in StatefulSet ovn-northd successful
(x2)

openstack

metallb-controller

swift-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

kubelet

ovn-controller-metrics-5w4cf

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine

openstack

kubelet

dnsmasq-dns-79745f7855-j9vwf

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

Namespace | Component | RelatedObject | Reason | Message
openstack | metallb-controller | swift-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | multus | dnsmasq-dns-79745f7855-j9vwf | AddedInterface | Add eth0 [10.128.0.180/23] from ovn-kubernetes
openstack | multus | ovn-northd-0 | AddedInterface | Add eth0 [10.128.0.181/23] from ovn-kubernetes
openstack | cert-manager-certificates-key-manager | swift-public-svc | Generated | Stored new private key in temporary Secret resource "swift-public-svc-d8jrj"
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-5b55dc5f67-k2lcw | AddedInterface | Add eth0 [10.128.0.182/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-venafi | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | swift-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | swift-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | swift-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-vault | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049 | swift-swift-storage-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0" (x2)
openstack | persistentvolume-controller | swift-swift-storage-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | persistentvolume-controller | swift-swift-storage-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Pod swift-storage-0 in StatefulSet swift-storage successful
openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success

openstack | kubelet | ovn-northd-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b5dba6c3776a5c366db4ceedbbb445c1f29b78cd2b0159ff41b9ea063a474a93"
openstack | kubelet | ovn-controller-metrics-5w4cf | Started | Started container openstack-network-exporter
openstack | cert-manager-certificates-issuing | swift-public-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-79745f7855-j9vwf | Started | Started container init
openstack | kubelet | dnsmasq-dns-79745f7855-j9vwf | Created | Created container: init
openstack | kubelet | ovn-controller-metrics-5w4cf | Created | Created container: openstack-network-exporter
openstack | cert-manager-certificates-key-manager | swift-internal-svc | Generated | Stored new private key in temporary Secret resource "swift-internal-svc-9mk5k"
openstack | cert-manager-certificates-request-manager | swift-internal-svc | Requested | Created new CertificateRequest resource "swift-internal-svc-1"
openstack | cert-manager-certificates-request-manager | swift-public-svc | Requested | Created new CertificateRequest resource "swift-public-svc-1"
openstack | cert-manager-certificates-issuing | swift-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-issuing | swift-public-route | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Created | Created container: init
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Started | Started container init
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049 | swift-swift-storage-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-0e673ada-f93f-4478-865c-179323f1aba0
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | swift-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | swift-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | swift-public-route | Generated | Stored new private key in temporary Secret resource "swift-public-route-cqtgs"
openstack | cert-manager-certificates-request-manager | swift-public-route | Requested | Created new CertificateRequest resource "swift-public-route-1"
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Started | Started container dnsmasq-dns
openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd
openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter
openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd
openstack | job-controller | swift-ring-rebalance | SuccessfulCreate | Created pod: swift-ring-rebalance-gm5ph
openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b5dba6c3776a5c366db4ceedbbb445c1f29b78cd2b0159ff41b9ea063a474a93" in 1.297s (1.297s including waiting). Image size: 347268557 bytes.
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Created | Created container: dnsmasq-dns
openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | multus | swift-ring-rebalance-gm5ph | AddedInterface | Add eth0 [10.128.0.184/23] from ovn-kubernetes
openstack | kubelet | swift-ring-rebalance-gm5ph | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123"
openstack | kubelet | swift-ring-rebalance-gm5ph | Started | Started container swift-ring-rebalance
openstack | kubelet | swift-ring-rebalance-gm5ph | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123" in 4.123s (4.123s including waiting). Image size: 500498707 bytes.
openstack | kubelet | swift-ring-rebalance-gm5ph | Created | Created container: swift-ring-rebalance (x5)
openstack | kubelet | swift-storage-0 | FailedMount | MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found
openstack | replicaset-controller | dnsmasq-dns-7c45d57b9c | SuccessfulDelete | Deleted pod: dnsmasq-dns-7c45d57b9c-jf69p
openstack | kubelet | dnsmasq-dns-7c45d57b9c-jf69p | Killing | Stopping container dnsmasq-dns
openstack | job-controller | glance-e923-account-create-update | SuccessfulCreate | Created pod: glance-e923-account-create-update-dswn2
openstack | job-controller | glance-db-create | SuccessfulCreate | Created pod: glance-db-create-jl696
openstack | multus | glance-db-create-jl696 | AddedInterface | Add eth0 [10.128.0.185/23] from ovn-kubernetes
openstack | kubelet | glance-e923-account-create-update-dswn2 | Started | Started container mariadb-account-create-update
openstack | kubelet | glance-e923-account-create-update-dswn2 | Created | Created container: mariadb-account-create-update
openstack | kubelet | glance-db-create-jl696 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | glance-db-create-jl696 | Created | Created container: mariadb-database-create
openstack | kubelet | glance-db-create-jl696 | Started | Started container mariadb-database-create
openstack | kubelet | glance-e923-account-create-update-dswn2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | glance-e923-account-create-update-dswn2 | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-pxfms
openstack | kubelet | root-account-create-update-pxfms | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | root-account-create-update-pxfms | Started | Started container mariadb-account-create-update
openstack | kubelet | root-account-create-update-pxfms | Created | Created container: mariadb-account-create-update
openstack | multus | root-account-create-update-pxfms | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes
openstack | job-controller | swift-ring-rebalance | Completed | Job completed

openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" already present on machine
openstack | multus | swift-storage-0 | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes
openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b"
openstack | job-controller | glance-e923-account-create-update | Completed | Job completed
openstack | kubelet | rabbitmq-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:ea8869786571e9ad2388af89ff4d38d887e32bc9340186598c63fe61a561eb20" already present on machine
openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-d7rmf
openstack | job-controller | keystone-5d23-account-create-update | SuccessfulCreate | Created pod: keystone-5d23-account-create-update-q2xlr
openstack | job-controller | glance-db-create | Completed | Job completed
openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-cjhw4
openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: rabbitmq
openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container rabbitmq
openstack | multus | keystone-5d23-account-create-update-q2xlr | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes
openstack | multus | placement-07e8-account-create-update-4xjm5 | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes
openstack | multus | keystone-db-create-d7rmf | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes
openstack | kubelet | keystone-db-create-d7rmf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | rabbitmq-server-0 | Started | Started container rabbitmq
openstack | job-controller | root-account-create-update | Completed | Job completed
openstack | job-controller | placement-07e8-account-create-update | SuccessfulCreate | Created pod: placement-07e8-account-create-update-4xjm5
openstack | kubelet | rabbitmq-server-0 | Created | Created container: rabbitmq
openstack | kubelet | keystone-5d23-account-create-update-q2xlr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" already present on machine
openstack | kubelet | placement-07e8-account-create-update-4xjm5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | keystone-5d23-account-create-update-q2xlr | Created | Created container: mariadb-account-create-update
openstack | kubelet | placement-07e8-account-create-update-4xjm5 | Created | Created container: mariadb-account-create-update
openstack | kubelet | placement-07e8-account-create-update-4xjm5 | Started | Started container mariadb-account-create-update
openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" in 1.343s (1.343s including waiting). Image size: 445458440 bytes.
openstack | kubelet | swift-storage-0 | Created | Created container: account-server
openstack | kubelet | swift-storage-0 | Started | Started container account-server
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" already present on machine
openstack | kubelet | keystone-db-create-d7rmf | Created | Created container: mariadb-database-create
openstack | kubelet | keystone-db-create-d7rmf | Started | Started container mariadb-database-create
openstack | kubelet | placement-db-create-cjhw4 | Started | Started container mariadb-database-create
openstack | kubelet | swift-storage-0 | Created | Created container: account-replicator
openstack | kubelet | swift-storage-0 | Started | Started container account-replicator
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:3297815e15f15151c67d850c6e2aeaf6351775e208b51cf5c3ec829c4ca6755b" already present on machine
openstack | kubelet | swift-storage-0 | Created | Created container: account-auditor
openstack | kubelet | placement-db-create-cjhw4 | Created | Created container: mariadb-database-create
openstack | kubelet | placement-db-create-cjhw4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | placement-db-create-cjhw4 | AddedInterface | Add eth0 [10.128.0.191/23] from ovn-kubernetes
openstack | kubelet | swift-storage-0 | Started | Started container account-auditor
openstack | kubelet | keystone-5d23-account-create-update-q2xlr | Started | Started container mariadb-account-create-update
openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:a92ecb870f7cde5bbfe109e99367b4fb913fa3319837a8d7d34dafb1e6547875"
openstack | kubelet | swift-storage-0 | Started | Started container account-reaper
openstack | kubelet | swift-storage-0 | Created | Created container: account-reaper
openstack | job-controller | glance-db-sync | SuccessfulCreate | Created pod: glance-db-sync-dnhq7
openstack | kubelet | swift-storage-0 | Started | Started container container-server
openstack | kubelet | swift-storage-0 | Created | Created container: container-server
openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:a92ecb870f7cde5bbfe109e99367b4fb913fa3319837a8d7d34dafb1e6547875" in 1.179s (1.179s including waiting). Image size: 445474826 bytes.
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:a92ecb870f7cde5bbfe109e99367b4fb913fa3319837a8d7d34dafb1e6547875" already present on machine
openstack | kubelet | swift-storage-0 | Created | Created container: container-replicator
openstack | multus | glance-db-sync-dnhq7 | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes
openstack | kubelet | swift-storage-0 | Started | Started container container-replicator
openstack | job-controller | placement-07e8-account-create-update | Completed | Job completed
openstack | job-controller | keystone-db-create | Completed | Job completed
openstack | job-controller | placement-db-create | Completed | Job completed
openstack | kubelet | glance-db-sync-dnhq7 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416"
openstack | multus | glance-db-sync-dnhq7 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage
openstack | job-controller | keystone-5d23-account-create-update | Completed | Job completed

openstack | kubelet | ovn-controller-hjmv9 | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
openstack | job-controller | ovn-controller-hjmv9-config | SuccessfulCreate | Created pod: ovn-controller-hjmv9-config-wk789
openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered
openstack | multus | ovn-controller-hjmv9-config-wk789 | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes
openstack | kubelet | ovn-controller-hjmv9-config-wk789 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417" already present on machine
openstack | replicaset-controller | dnsmasq-dns-84556f859 | SuccessfulCreate | Created pod: dnsmasq-dns-84556f859-6lpst
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-klrwt
openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered
openstack | kubelet | ovn-controller-hjmv9-config-wk789 | Started | Started container ovn-config
openstack | kubelet | ovn-controller-hjmv9-config-wk789 | Created | Created container: ovn-config
openstack | multus | dnsmasq-dns-84556f859-6lpst | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes
openstack | kubelet | root-account-create-update-klrwt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | root-account-create-update-klrwt | Created | Created container: mariadb-account-create-update
openstack | kubelet | root-account-create-update-klrwt | Started | Started container mariadb-account-create-update
openstack | multus | root-account-create-update-klrwt | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-84556f859-6lpst | Created | Created container: init
openstack | kubelet | dnsmasq-dns-84556f859-6lpst | Started | Started container init
openstack | kubelet | dnsmasq-dns-84556f859-6lpst | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-84556f859-6lpst | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-84556f859-6lpst | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-84556f859-6lpst | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | job-controller | ovn-controller-hjmv9-config | SuccessfulCreate | Created pod: ovn-controller-hjmv9-config-rhxl6
openstack | job-controller | ovn-controller-hjmv9-config | Completed | Job completed
openstack | multus | ovn-controller-hjmv9-config-rhxl6 | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes
openstack | metallb-speaker | rabbitmq | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | job-controller | neutron-db-create | SuccessfulCreate | Created pod: neutron-db-create-x5qn7
openstack | job-controller | cinder-db-create | SuccessfulCreate | Created pod: cinder-db-create-vptkz
openstack | job-controller | keystone-db-sync | SuccessfulCreate | Created pod: keystone-db-sync-4z2pz
openstack | job-controller | cinder-ed6f-account-create-update | SuccessfulCreate | Created pod: cinder-ed6f-account-create-update-kn7d6
openstack | metallb-speaker | rabbitmq-cell1 | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | job-controller | neutron-7051-account-create-update | SuccessfulCreate | Created pod: neutron-7051-account-create-update-2j7gx
openstack | replicaset-controller | dnsmasq-dns-5b55dc5f67 | SuccessfulDelete | Deleted pod: dnsmasq-dns-5b55dc5f67-k2lcw
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Killing | Stopping container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-5b55dc5f67-k2lcw | Unhealthy | Readiness probe failed: dial tcp 10.128.0.182:5353: connect: connection refused
openstack | kubelet | ovn-controller-hjmv9-config-rhxl6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:5a01d6902fcff84f31d264784a24433f1266e51e84e70ca3796953855fdec417" already present on machine
openstack | kubelet | glance-db-sync-dnhq7 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" in 18.681s (18.681s including waiting). Image size: 983253362 bytes.
openstack | multus | cinder-db-create-vptkz | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes
openstack | kubelet | cinder-db-create-vptkz | Started | Started container mariadb-database-create
openstack | kubelet | neutron-7051-account-create-update-2j7gx | Started | Started container mariadb-account-create-update
openstack | kubelet | keystone-db-sync-4z2pz | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8"
openstack | multus | neutron-db-create-x5qn7 | AddedInterface | Add eth0 [10.128.0.200/23] from ovn-kubernetes
openstack | multus | keystone-db-sync-4z2pz | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes
openstack | kubelet | cinder-ed6f-account-create-update-kn7d6 | Started | Started container mariadb-account-create-update
openstack | kubelet | neutron-7051-account-create-update-2j7gx | Created | Created container: mariadb-account-create-update
openstack | kubelet | cinder-ed6f-account-create-update-kn7d6 | Created | Created container: mariadb-account-create-update
openstack | kubelet | cinder-ed6f-account-create-update-kn7d6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | neutron-7051-account-create-update-2j7gx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | neutron-7051-account-create-update-2j7gx | AddedInterface | Add eth0 [10.128.0.201/23] from ovn-kubernetes
openstack | kubelet | ovn-controller-hjmv9-config-rhxl6 | Started | Started container ovn-config
openstack | multus | cinder-ed6f-account-create-update-kn7d6 | AddedInterface | Add eth0 [10.128.0.198/23] from ovn-kubernetes
openstack | kubelet | glance-db-sync-dnhq7 | Started | Started container glance-db-sync
openstack | kubelet | glance-db-sync-dnhq7 | Created | Created container: glance-db-sync
openstack | kubelet | cinder-db-create-vptkz | Created | Created container: mariadb-database-create
openstack | kubelet | cinder-db-create-vptkz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | job-controller | root-account-create-update | Completed | Job completed
openstack | kubelet | neutron-db-create-x5qn7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | kubelet | ovn-controller-hjmv9-config-rhxl6 | Created | Created container: ovn-config
openstack | kubelet | neutron-db-create-x5qn7 | Created | Created container: mariadb-database-create
openstack | kubelet | neutron-db-create-x5qn7 | Started | Started container mariadb-database-create
openstack | job-controller | ovn-controller-hjmv9-config | Completed | Job completed
openstack | kubelet | keystone-db-sync-4z2pz | Started | Started container keystone-db-sync
openstack | kubelet | keystone-db-sync-4z2pz | Created | Created container: keystone-db-sync
openstack | kubelet | keystone-db-sync-4z2pz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" in 5.622s (5.622s including waiting). Image size: 520429064 bytes.
openstack | job-controller | neutron-db-create | Completed | Job completed
openstack | job-controller | cinder-db-create | Completed | Job completed
openstack | job-controller | cinder-ed6f-account-create-update | Completed | Job completed
openstack | job-controller | neutron-7051-account-create-update | Completed | Job completed
openstack | job-controller | keystone-db-sync | Completed | Job completed

openstack | cert-manager-certificates-request-manager | keystone-internal-svc | Requested | Created new CertificateRequest resource "keystone-internal-svc-1"
openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | job-controller | placement-db-sync | SuccessfulCreate | Created pod: placement-db-sync-njvpx
openstack | metallb-controller | placement-internal | IPAllocated | Assigned IP ["172.17.0.80"] (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | cert-manager-certificaterequests-issuer-vault | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | keystone-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | keystone-internal-svc | Generated | Stored new private key in temporary Secret resource "keystone-internal-svc-t4dtz" (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | cert-manager-certificates-issuing | keystone-internal-svc | Issuing | The certificate has been successfully issued
openstack | metallb-controller | keystone-internal | IPAllocated | Assigned IP ["172.17.0.80"] (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
default | endpoint-controller | placement-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/placement-internal: endpoints "placement-internal" already exists
openstack | job-controller | ironic-db-create | SuccessfulCreate | Created pod: ironic-db-create-8l585
openstack | job-controller | neutron-db-sync | SuccessfulCreate | Created pod: neutron-db-sync-k6pnr
openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-bw6l8
openstack | job-controller | ironic-ecce-account-create-update | SuccessfulCreate | Created pod: ironic-ecce-account-create-update-2pvjj
openstack | job-controller | cinder-6ac23-db-sync | SuccessfulCreate | Created pod: cinder-6ac23-db-sync-mhchn
openstack | replicaset-controller | dnsmasq-dns-597f6b8457 | SuccessfulCreate | Created pod: dnsmasq-dns-597f6b8457-gn4tl
openstack | replicaset-controller | dnsmasq-dns-597f6b8457 | SuccessfulDelete | Deleted pod: dnsmasq-dns-597f6b8457-gn4tl
openstack | replicaset-controller | dnsmasq-dns-64b4994945 | SuccessfulCreate | Created pod: dnsmasq-dns-64b4994945-klvx7
openstack | kubelet | neutron-db-sync-k6pnr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-597f6b8457-gn4tl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | keystone-bootstrap-bw6l8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine
openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-tf8dh"
openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1"
openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued
openstack | job-controller | glance-db-sync | Completed | Job completed
openstack | multus | ironic-db-create-8l585 | AddedInterface | Add eth0 [10.128.0.204/23] from ovn-kubernetes
openstack | multus | keystone-bootstrap-bw6l8 | AddedInterface | Add eth0 [10.128.0.202/23] from ovn-kubernetes
openstack | statefulset-controller | glance-8705a-default-external-api | SuccessfulCreate | create Claim glance-glance-8705a-default-external-api-0 Pod glance-8705a-default-external-api-0 in StatefulSet glance-8705a-default-external-api success
openstack | persistentvolume-controller | glance-glance-8705a-default-external-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | kubelet | ironic-db-create-8l585 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | ironic-ecce-account-create-update-2pvjj | AddedInterface | Add eth0 [10.128.0.206/23] from ovn-kubernetes
openstack | kubelet | ironic-ecce-account-create-update-2pvjj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine
openstack | multus | neutron-db-sync-k6pnr | AddedInterface | Add eth0 [10.128.0.205/23] from ovn-kubernetes

openstack

multus

cinder-6ac23-db-sync-mhchn

AddedInterface

Add eth0 [10.128.0.207/23] from ovn-kubernetes

openstack

multus

dnsmasq-dns-597f6b8457-gn4tl

AddedInterface

Add eth0 [10.128.0.203/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-approver

keystone-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

dnsmasq-dns-64b4994945-klvx7

Started

Started container init

openstack

kubelet

dnsmasq-dns-597f6b8457-gn4tl

Started

Started container init

openstack

kubelet

dnsmasq-dns-597f6b8457-gn4tl

Created

Created container: init

openstack

cert-manager-certificates-trigger

keystone-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

multus

dnsmasq-dns-64b4994945-klvx7

AddedInterface

Add eth0 [10.128.0.208/23] from ovn-kubernetes

openstack

cert-manager-certificates-issuing

keystone-public-route

Issuing

The certificate has been successfully issued

openstack

kubelet

cinder-6ac23-db-sync-mhchn

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead"

openstack

kubelet

dnsmasq-dns-64b4994945-klvx7

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
(x2)

openstack

metallb-controller

glance-default-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificaterequests-issuer-selfsigned

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-ecce-account-create-update-2pvjj

Started

Started container mariadb-account-create-update

openstack

kubelet

dnsmasq-dns-64b4994945-klvx7

Created

Created container: init

openstack

cert-manager-certificaterequests-issuer-venafi

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

neutron-db-sync-k6pnr

Created

Created container: neutron-db-sync
(x2)

openstack

metallb-controller

glance-default-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

kubelet

neutron-db-sync-k6pnr

Started

Started container neutron-db-sync

openstack

cert-manager-certificaterequests-issuer-ca

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-ecce-account-create-update-2pvjj

Created

Created container: mariadb-account-create-update

openstack

cert-manager-certificaterequests-issuer-ca

keystone-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

keystone-public-route

Generated

Stored new private key in temporary Secret resource "keystone-public-route-btjg4"

openstack

cert-manager-certificaterequests-issuer-acme

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-db-create-8l585

Started

Started container mariadb-database-create

openstack

kubelet

ironic-db-create-8l585

Created

Created container: mariadb-database-create

openstack

replicaset-controller

dnsmasq-dns-64b4994945

SuccessfulDelete

Deleted pod: dnsmasq-dns-64b4994945-klvx7

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

glance-glance-8705a-default-external-api-0

Provisioning

External provisioner is provisioning volume for claim "openstack/glance-glance-8705a-default-external-api-0"

openstack

replicaset-controller

dnsmasq-dns-7f74bd995c

SuccessfulCreate

Created pod: dnsmasq-dns-7f74bd995c-jflbg

openstack

statefulset-controller

glance-8705a-default-internal-api

SuccessfulCreate

create Claim glance-glance-8705a-default-internal-api-0 Pod glance-8705a-default-internal-api-0 in StatefulSet glance-8705a-default-internal-api success

openstack

cert-manager-certificaterequests-issuer-vault

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

persistentvolume-controller

glance-glance-8705a-default-internal-api-0

WaitForFirstConsumer

waiting for first consumer to be created before binding
(x2)

openstack

metallb-controller

glance-default-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

cert-manager-certificates-request-manager

keystone-public-route

Requested

Created new CertificateRequest resource "keystone-public-route-1"

openstack

kubelet

keystone-bootstrap-bw6l8

Created

Created container: keystone-bootstrap

openstack

kubelet

keystone-bootstrap-bw6l8

Started

Started container keystone-bootstrap

openstack

kubelet

placement-db-sync-njvpx

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0"

openstack

multus

placement-db-sync-njvpx

AddedInterface

Add eth0 [10.128.0.209/23] from ovn-kubernetes

openstack

metallb-controller

glance-default-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

cert-manager-certificates-trigger

placement-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

glance-glance-8705a-default-external-api-0

ProvisioningSucceeded

Successfully provisioned volume pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

glance-glance-8705a-default-internal-api-0

Provisioning

External provisioner is provisioning volume for claim "openstack/glance-glance-8705a-default-internal-api-0"

openstack

cert-manager-certificates-trigger

placement-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-acme

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

placement-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

placement-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

placement-internal-svc

Generated

Stored new private key in temporary Secret resource "placement-internal-svc-nsqpg"
(x2)

openstack

persistentvolume-controller

glance-glance-8705a-default-external-api-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

cert-manager-certificates-request-manager

placement-internal-svc

Requested

Created new CertificateRequest resource "placement-internal-svc-1"

openstack

cert-manager-certificates-issuing

placement-internal-svc

Issuing

The certificate has been successfully issued

openstack

multus

dnsmasq-dns-7f74bd995c-jflbg

AddedInterface

Add eth0 [10.128.0.210/23] from ovn-kubernetes
(x3)

openstack

persistentvolume-controller

glance-glance-8705a-default-internal-api-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

kubelet

dnsmasq-dns-7f74bd995c-jflbg

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-issuer-vault

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-7f74bd995c-jflbg

Started

Started container init

openstack

kubelet

dnsmasq-dns-7f74bd995c-jflbg

Created

Created container: init

openstack

cert-manager-certificaterequests-approver

placement-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-trigger

placement-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-issuing

placement-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

glance-glance-8705a-default-internal-api-0

ProvisioningSucceeded

Successfully provisioned volume pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8

openstack

cert-manager-certificates-key-manager

placement-public-svc

Generated

Stored new private key in temporary Secret resource "placement-public-svc-sq6wq"

openstack

cert-manager-certificates-request-manager

placement-public-svc

Requested

Created new CertificateRequest resource "placement-public-svc-1"

openstack

cert-manager-certificates-key-manager

placement-public-route

Generated

Stored new private key in temporary Secret resource "placement-public-route-grcmj"

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

placement-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

dnsmasq-dns-7f74bd995c-jflbg

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificates-request-manager

placement-public-route

Requested

Created new CertificateRequest resource "placement-public-route-1"

openstack

kubelet

dnsmasq-dns-7f74bd995c-jflbg

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-7f74bd995c-jflbg

Started

Started container dnsmasq-dns

openstack

cert-manager-certificates-issuing

placement-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

glance-default-internal-svc

Requested

Created new CertificateRequest resource "glance-default-internal-svc-1"

openstack

cert-manager-certificates-key-manager

glance-default-internal-svc

Generated

Stored new private key in temporary Secret resource "glance-default-internal-svc-vjxzp"

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

glance-default-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-trigger

glance-default-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

glance-default-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

glance-default-public-svc

Generated

Stored new private key in temporary Secret resource "glance-default-public-svc-2jsq2"

openstack

cert-manager-certificates-request-manager

glance-default-public-svc

Requested

Created new CertificateRequest resource "glance-default-public-svc-1"

openstack

cert-manager-certificates-issuing

glance-default-internal-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

placement-db-sync-njvpx

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" in 5.337s (5.337s including waiting). Image size: 472994007 bytes.

openstack

cert-manager-certificaterequests-issuer-vault

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

placement-db-sync-njvpx

Created

Created container: placement-db-sync

openstack

kubelet

placement-db-sync-njvpx

Started

Started container placement-db-sync

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

glance-default-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

glance-8705a-default-external-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

cert-manager-certificates-issuing

glance-default-public-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

glance-8705a-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-8705a-default-external-api-0

Created

Created container: glance-log

openstack

multus

glance-8705a-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.212/23] from ovn-kubernetes

openstack

multus

glance-8705a-default-external-api-0

AddedInterface

Add eth0 [10.128.0.211/23] from ovn-kubernetes

openstack

job-controller

ironic-db-create

Completed

Job completed

openstack

kubelet

glance-8705a-default-external-api-0

Started

Started container glance-log

openstack

multus

glance-8705a-default-internal-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

cert-manager-certificates-trigger

glance-default-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

glance-8705a-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

job-controller

ironic-ecce-account-create-update

Completed

Job completed

openstack

kubelet

glance-8705a-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-8705a-default-external-api-0

Started

Started container glance-httpd

openstack

cert-manager-certificaterequests-issuer-vault

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

glance-8705a-default-internal-api-0

Started

Started container glance-log

openstack

kubelet

glance-8705a-default-internal-api-0

Created

Created container: glance-log

openstack

cert-manager-certificaterequests-issuer-selfsigned

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-request-manager

glance-default-public-route

Requested

Created new CertificateRequest resource "glance-default-public-route-1"

openstack

kubelet

glance-8705a-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

cert-manager-certificaterequests-approver

glance-default-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

glance-8705a-default-external-api-0

Created

Created container: glance-httpd

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

glance-default-public-route

Generated

Stored new private key in temporary Secret resource "glance-default-public-route-jbgtv"

openstack

cert-manager-certificaterequests-issuer-venafi

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

glance-default-public-route

Issuing

The certificate has been successfully issued

openstack

job-controller

ironic-db-sync

SuccessfulCreate

Created pod: ironic-db-sync-jzr8b

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

kubelet

glance-8705a-default-internal-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-8705a-default-internal-api-0

Started

Started container glance-httpd

openstack

job-controller

keystone-bootstrap

SuccessfulCreate

Created pod: keystone-bootstrap-wdcb6

openstack

kubelet

glance-8705a-default-external-api-0

Killing

Stopping container glance-log

openstack

multus

keystone-bootstrap-wdcb6

AddedInterface

Add eth0 [10.128.0.213/23] from ovn-kubernetes

openstack

kubelet

glance-8705a-default-internal-api-0

Killing

Stopping container glance-log

openstack

kubelet

glance-8705a-default-internal-api-0

Killing

Stopping container glance-httpd

openstack

kubelet

glance-8705a-default-external-api-0

Killing

Stopping container glance-httpd

openstack

kubelet

keystone-bootstrap-wdcb6

Started

Started container keystone-bootstrap

openstack

kubelet

dnsmasq-dns-84556f859-6lpst

Killing

Stopping container dnsmasq-dns

openstack

replicaset-controller

dnsmasq-dns-84556f859

SuccessfulDelete

Deleted pod: dnsmasq-dns-84556f859-6lpst

openstack

kubelet

keystone-bootstrap-wdcb6

Created

Created container: keystone-bootstrap

openstack

kubelet

keystone-bootstrap-wdcb6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine

openstack

multus

ironic-db-sync-jzr8b

AddedInterface

Add eth0 [10.128.0.214/23] from ovn-kubernetes

openstack

kubelet

ironic-db-sync-jzr8b

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556"

openstack

replicaset-controller

keystone-8f98fb65f

SuccessfulCreate

Created pod: keystone-8f98fb65f-btxw6

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

deployment-controller

keystone

ScalingReplicaSet

Scaled up replica set keystone-8f98fb65f to 1

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled up replica set placement-f597cf46d to 1

openstack

job-controller

placement-db-sync

Completed

Job completed

openstack

replicaset-controller

placement-f597cf46d

SuccessfulCreate

Created pod: placement-f597cf46d-llslv
(x2)

openstack

kubelet

dnsmasq-dns-84556f859-6lpst

Unhealthy

Readiness probe failed: dial tcp 10.128.0.194:5353: i/o timeout

openstack

kubelet

ironic-db-sync-jzr8b

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" in 16.536s (16.536s including waiting). Image size: 599312972 bytes.

openstack

kubelet

cinder-6ac23-db-sync-mhchn

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" in 26.787s (26.787s including waiting). Image size: 1161440551 bytes.

openstack

kubelet

glance-8705a-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

multus

glance-8705a-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.216/23] from ovn-kubernetes

openstack

multus

placement-f597cf46d-llslv

AddedInterface

Add eth0 [10.128.0.218/23] from ovn-kubernetes

openstack

kubelet

placement-f597cf46d-llslv

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" already present on machine

openstack

kubelet

placement-f597cf46d-llslv

Created

Created container: placement-log

openstack

kubelet

placement-f597cf46d-llslv

Started

Started container placement-log

openstack

kubelet

placement-f597cf46d-llslv

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:92021b9fa077b6fc021f2910d238184f2a7dacc0a564b5da13f4b0fb68318cf0" already present on machine

openstack

kubelet

ironic-db-sync-jzr8b

Created

Created container: init

openstack

kubelet

cinder-6ac23-db-sync-mhchn

Started

Started container cinder-6ac23-db-sync

openstack

kubelet

ironic-db-sync-jzr8b

Started

Started container init

openstack

kubelet

keystone-8f98fb65f-btxw6

Created

Created container: keystone-api

openstack

kubelet

keystone-8f98fb65f-btxw6

Started

Started container keystone-api

openstack

multus

glance-8705a-default-internal-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

multus

glance-8705a-default-external-api-0

AddedInterface

Add eth0 [10.128.0.215/23] from ovn-kubernetes

openstack

multus

glance-8705a-default-external-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

kubelet

cinder-6ac23-db-sync-mhchn

Created

Created container: cinder-6ac23-db-sync

openstack

kubelet

keystone-8f98fb65f-btxw6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine

openstack

multus

keystone-8f98fb65f-btxw6

AddedInterface

Add eth0 [10.128.0.217/23] from ovn-kubernetes

openstack

kubelet

glance-8705a-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-8705a-default-internal-api-0

Started

Started container glance-httpd

openstack

kubelet

glance-8705a-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine
(x2)

openstack

kubelet

ironic-db-sync-jzr8b

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" already present on machine

openstack

kubelet

glance-8705a-default-internal-api-0

Created

Created container: glance-log

openstack

kubelet

glance-8705a-default-internal-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-8705a-default-external-api-0

Created

Created container: glance-log

openstack

kubelet

placement-f597cf46d-llslv

Created

Created container: placement-api

openstack

kubelet

glance-8705a-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-8705a-default-external-api-0

Started

Started container glance-httpd

openstack

kubelet

glance-8705a-default-external-api-0

Started

Started container glance-log

openstack

kubelet

glance-8705a-default-external-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-8705a-default-internal-api-0

Started

Started container glance-log

openstack

kubelet

placement-f597cf46d-llslv

Started

Started container placement-api

openstack

kubelet

ironic-db-sync-jzr8b

Failed

Error: container create failed: mount `/var/lib/kubelet/pods/a3a705bf-9636-4410-a44a-6ff6907d4179/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory

openstack

kubelet

ironic-db-sync-jzr8b

Started

Started container ironic-db-sync

openstack

kubelet

ironic-db-sync-jzr8b

Created

Created container: ironic-db-sync

openstack

job-controller

neutron-db-sync

Completed

Job completed

openstack

replicaset-controller

neutron-55455d5d8d

SuccessfulCreate

Created pod: neutron-55455d5d8d-zzwzz
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

cert-manager-certificates-trigger

neutron-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

replicaset-controller

dnsmasq-dns-674dc645f

SuccessfulCreate

Created pod: dnsmasq-dns-674dc645f-b7fhr

openstack

metallb-controller

neutron-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-55455d5d8d to 1
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-674dc645f-b7fhr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

multus

neutron-55455d5d8d-zzwzz

AddedInterface

Add internalapi [172.17.0.32/24] from openstack/internalapi

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

dnsmasq-dns-674dc645f-b7fhr

AddedInterface

Add eth0 [10.128.0.219/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-vault

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

neutron-internal-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

neutron-55455d5d8d-zzwzz

Created

Created container: neutron-api

openstack

multus

neutron-55455d5d8d-zzwzz

AddedInterface

Add eth0 [10.128.0.220/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-674dc645f-b7fhr

Started

Started container init

openstack

cert-manager-certificaterequests-issuer-acme

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

neutron-internal-svc

Generated

Stored new private key in temporary Secret resource "neutron-internal-svc-d8zk4"

openstack

cert-manager-certificates-trigger

neutron-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-674dc645f-b7fhr

Created

Created container: init

openstack

cert-manager-certificates-request-manager

neutron-internal-svc

Requested

Created new CertificateRequest resource "neutron-internal-svc-1"

openstack

kubelet

neutron-55455d5d8d-zzwzz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

dnsmasq-dns-674dc645f-b7fhr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-approver

neutron-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

neutron-55455d5d8d-zzwzz

Started

Started container neutron-httpd

openstack

cert-manager-certificates-request-manager

neutron-public-route

Requested

Created new CertificateRequest resource "neutron-public-route-1"
(x25)

openstack

metallb-speaker

dnsmasq-dns

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

neutron-55455d5d8d-zzwzz

Started

Started container neutron-api

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

neutron-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

neutron-55455d5d8d-zzwzz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

neutron-55455d5d8d-zzwzz

Created

Created container: neutron-httpd

openstack

cert-manager-certificates-key-manager

neutron-public-route

Generated

Stored new private key in temporary Secret resource "neutron-public-route-ptrwp"

openstack

kubelet

dnsmasq-dns-674dc645f-b7fhr

Started

Started container dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

neutron-public-svc

Generated

Stored new private key in temporary Secret resource "neutron-public-svc-7bb5p"

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-674dc645f-b7fhr

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

neutron-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

neutron-public-svc

Requested

Created new CertificateRequest resource "neutron-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-issuing

neutron-public-route

Issuing

The certificate has been successfully issued

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-6b46dbc6bf to 1

openstack

replicaset-controller

neutron-6b46dbc6bf

SuccessfulCreate

Created pod: neutron-6b46dbc6bf-ngrn9

openstack

kubelet

neutron-6b46dbc6bf-ngrn9

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

multus

neutron-6b46dbc6bf-ngrn9

AddedInterface

Add internalapi [172.17.0.33/24] from openstack/internalapi

openstack

multus

neutron-6b46dbc6bf-ngrn9

AddedInterface

Add eth0 [10.128.0.221/23] from ovn-kubernetes

openstack

kubelet

neutron-6b46dbc6bf-ngrn9

Started

Started container neutron-httpd

openstack

kubelet

neutron-6b46dbc6bf-ngrn9

Created

Created container: neutron-api

openstack

metallb-controller

cinder-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

kubelet

neutron-6b46dbc6bf-ngrn9

Created

Created container: neutron-httpd

openstack

kubelet

neutron-6b46dbc6bf-ngrn9

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

job-controller

cinder-6ac23-db-sync

Completed

Job completed

openstack

kubelet

neutron-6b46dbc6bf-ngrn9

Started

Started container neutron-api

openstack

cert-manager-certificates-request-manager

cinder-internal-svc

Requested

Created new CertificateRequest resource "cinder-internal-svc-1"

openstack

cert-manager-certificaterequests-issuer-acme

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

dnsmasq-dns-674dc645f

SuccessfulDelete

Deleted pod: dnsmasq-dns-674dc645f-b7fhr

openstack

cert-manager-certificates-issuing

cinder-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-key-manager

cinder-internal-svc

Generated

Stored new private key in temporary Secret resource "cinder-internal-svc-kqdvj"

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

dnsmasq-dns-6bc5ccc685

SuccessfulCreate

Created pod: dnsmasq-dns-6bc5ccc685-kl2f6

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-674dc645f-b7fhr

Killing

Stopping container dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

cinder-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

cinder-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-trigger

cinder-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

cinder-public-svc

Generated

Stored new private key in temporary Secret resource "cinder-public-svc-4rkb9"

openstack

multus

cinder-6ac23-api-0

AddedInterface

Add eth0 [10.128.0.226/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-6bc5ccc685-kl2f6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-approver

cinder-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

multus

cinder-6ac23-scheduler-0

AddedInterface

Add eth0 [10.128.0.222/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-6bc5ccc685-kl2f6

Started

Started container init

openstack

kubelet

cinder-6ac23-backup-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7"

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

cinder-6ac23-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

multus

cinder-6ac23-backup-0

AddedInterface

Add eth0 [10.128.0.225/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-6bc5ccc685-kl2f6

Created

Created container: init

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-6ac23-scheduler-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4"

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d"

openstack

cert-manager-certificates-issuing

cinder-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-public-svc

Requested

Created new CertificateRequest resource "cinder-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

cinder-6ac23-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine

openstack

multus

dnsmasq-dns-6bc5ccc685-kl2f6

AddedInterface

Add eth0 [10.128.0.224/23] from ovn-kubernetes

openstack

multus

cinder-6ac23-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.223/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-6ac23-api-0

Created

Created container: cinder-6ac23-api-log

openstack

kubelet

cinder-6ac23-api-0

Started

Started container cinder-6ac23-api-log

openstack

kubelet

cinder-6ac23-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" in 1.145s (1.145s including waiting). Image size: 1084233182 bytes.

openstack

kubelet

cinder-6ac23-scheduler-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" in 795ms (795ms including waiting). Image size: 1083291295 bytes.

openstack

statefulset-controller

cinder-6ac23-api

SuccessfulDelete

delete Pod cinder-6ac23-api-0 in StatefulSet cinder-6ac23-api successful

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-6ac23-backup-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" in 1.131s (1.131s including waiting). Image size: 1083296539 bytes.

openstack

cert-manager-certificates-issuing

cinder-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

cinder-public-route

Requested

Created new CertificateRequest resource "cinder-public-route-1"

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-6bc5ccc685-kl2f6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

cinder-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

cinder-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

cinder-public-route

Generated

Stored new private key in temporary Secret resource "cinder-public-route-66gmh"

openstack

kubelet

cinder-6ac23-scheduler-0

Started

Started container cinder-scheduler

openstack

kubelet

cinder-6ac23-api-0

Started

Started container cinder-api

openstack

kubelet

dnsmasq-dns-6bc5ccc685-kl2f6

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-6bc5ccc685-kl2f6

Started

Started container dnsmasq-dns

openstack

kubelet

cinder-6ac23-api-0

Created

Created container: cinder-api

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Started

Started container probe

openstack

kubelet

cinder-6ac23-scheduler-0

Started

Started container probe

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Created

Created container: probe

openstack

kubelet

cinder-6ac23-scheduler-0

Created

Created container: probe

openstack

kubelet

cinder-6ac23-backup-0

Started

Started container probe

openstack

kubelet

cinder-6ac23-backup-0

Created

Created container: probe

openstack

kubelet

cinder-6ac23-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" already present on machine

openstack

kubelet

cinder-6ac23-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" already present on machine

openstack

kubelet

cinder-6ac23-backup-0

Started

Started container cinder-backup

openstack

kubelet

cinder-6ac23-backup-0

Created

Created container: cinder-backup

openstack

kubelet

cinder-6ac23-scheduler-0

Created

Created container: cinder-scheduler

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" already present on machine

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Started

Started container cinder-volume

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Created

Created container: cinder-volume

openstack

kubelet

cinder-6ac23-api-0

Killing

Stopping container cinder-6ac23-api-log

openstack

kubelet

cinder-6ac23-api-0

Killing

Stopping container cinder-api
(x2)

openstack

statefulset-controller

cinder-6ac23-api

SuccessfulCreate

create Pod cinder-6ac23-api-0 in StatefulSet cinder-6ac23-api successful

openstack

multus

cinder-6ac23-api-0

AddedInterface

Add eth0 [10.128.0.227/23] from ovn-kubernetes

openstack

kubelet

cinder-6ac23-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine

openstack

job-controller

ironic-db-sync

Completed

Job completed
(x16)

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

(combined from similar events): Scaled down replica set dnsmasq-dns-6bc5ccc685 to 0 from 1

openstack

job-controller

ironic-inspector-db-create

SuccessfulCreate

Created pod: ironic-inspector-db-create-8kz9s

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled up replica set ironic-75c678c459 to 1

openstack

replicaset-controller

ironic-75c678c459

SuccessfulCreate

Created pod: ironic-75c678c459-9mmbb
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

deployment-controller

ironic-neutron-agent

ScalingReplicaSet

Scaled up replica set ironic-neutron-agent-7d8f6784f6 to 1

openstack

statefulset-controller

ironic-conductor

SuccessfulCreate

create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful

openstack

replicaset-controller

ironic-neutron-agent-7d8f6784f6

SuccessfulCreate

Created pod: ironic-neutron-agent-7d8f6784f6-dqjdm

openstack

statefulset-controller

ironic-conductor

SuccessfulCreate

create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success

openstack

job-controller

ironic-inspector-2bdc-account-create-update

SuccessfulCreate

Created pod: ironic-inspector-2bdc-account-create-update-5cgdd

openstack

persistentvolume-controller

var-lib-ironic-ironic-conductor-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

replicaset-controller

dnsmasq-dns-6bc5ccc685

SuccessfulDelete

Deleted pod: dnsmasq-dns-6bc5ccc685-kl2f6
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

kubelet

dnsmasq-dns-6bc5ccc685-kl2f6

Killing

Stopping container dnsmasq-dns
(x2)

openstack

persistentvolume-controller

var-lib-ironic-ironic-conductor-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

var-lib-ironic-ironic-conductor-0

Provisioning

External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0"

openstack

replicaset-controller

dnsmasq-dns-6b45666449

SuccessfulCreate

Created pod: dnsmasq-dns-6b45666449-v77b5

openstack

metallb-controller

ironic-internal

IPAllocated

Assigned IP ["192.168.122.80"]

openstack

cert-manager-certificates-trigger

ironic-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-ca

ironic-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

topolvm.io_lvms-operator-7bbcf6487b-nkgxz_2f08857c-1048-4147-8f24-6b01e0cca049

var-lib-ironic-ironic-conductor-0

ProvisioningSucceeded

Successfully provisioned volume pvc-4941f0bb-aa69-433a-901d-c8b9ad538b67

openstack

kubelet

cinder-6ac23-api-0

Started

Started container cinder-6ac23-api-log

openstack

kubelet

ironic-neutron-agent-7d8f6784f6-dqjdm

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:a3f5b519c7fc33e9f66fe553a7bc5cce51c3ff01223190cfa93bb75149a1dfcc"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

ironic-inspector-2bdc-account-create-update-5cgdd

AddedInterface

Add eth0 [10.128.0.230/23] from ovn-kubernetes

openstack

multus

ironic-neutron-agent-7d8f6784f6-dqjdm

AddedInterface

Add eth0 [10.128.0.229/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ironic-public-svc

Requested

Created new CertificateRequest resource "ironic-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-6ac23-api-0

Created

Created container: cinder-6ac23-api-log

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ironic-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

ironic-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-vault

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ironic-public-svc

Generated

Stored new private key in temporary Secret resource "ironic-public-svc-mskkq"

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-inspector-db-create-8kz9s

Started

Started container mariadb-database-create

openstack

kubelet

ironic-inspector-db-create-8kz9s

Created

Created container: mariadb-database-create

openstack

cert-manager-certificaterequests-issuer-vault

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

ironic-internal-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

ironic-inspector-db-create-8kz9s

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

cert-manager-certificaterequests-approver

ironic-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

cinder-6ac23-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:76b63ab76a2865b38e0fc61c5800b33683b6bd2f6b77eb1c791aad230bbebead" already present on machine

openstack

multus

ironic-inspector-db-create-8kz9s

AddedInterface

Add eth0 [10.128.0.228/23] from ovn-kubernetes

openstack

cert-manager-certificates-request-manager

ironic-internal-svc

Requested

Created new CertificateRequest resource "ironic-internal-svc-1"

openstack

cert-manager-certificates-key-manager

ironic-internal-svc

Generated

Stored new private key in temporary Secret resource "ironic-internal-svc-5zvcl"

openstack

cert-manager-certificates-key-manager

ironic-public-route

Generated

Stored new private key in temporary Secret resource "ironic-public-route-km2ww"

openstack

kubelet

cinder-6ac23-api-0

Created

Created container: cinder-api

openstack

cert-manager-certificaterequests-issuer-vault

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-75c678c459-9mmbb

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f"

openstack

cert-manager-certificates-issuing

ironic-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-acme

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ironic-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

dnsmasq-dns-6b45666449-v77b5

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificates-trigger

ironic-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-6b45666449-v77b5

Started

Started container init

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-inspector-2bdc-account-create-update-5cgdd

Created

Created container: mariadb-account-create-update

openstack

kubelet

ironic-inspector-2bdc-account-create-update-5cgdd

Started

Started container mariadb-account-create-update

openstack

multus

ironic-75c678c459-9mmbb

AddedInterface

Add eth0 [10.128.0.232/23] from ovn-kubernetes

openstack

cert-manager-certificates-request-manager

ironic-public-route

Requested

Created new CertificateRequest resource "ironic-public-route-1"

openstack

kubelet

ironic-inspector-2bdc-account-create-update-5cgdd

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

multus

dnsmasq-dns-6b45666449-v77b5

AddedInterface

Add eth0 [10.128.0.231/23] from ovn-kubernetes

openstack

kubelet

cinder-6ac23-api-0

Started

Started container cinder-api

openstack

cert-manager-certificates-issuing

ironic-public-route

Issuing

The certificate has been successfully issued

openstack

kubelet

dnsmasq-dns-6b45666449-v77b5

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-6b45666449-v77b5

Created

Created container: init

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-6ac23-scheduler-0

Killing

Stopping container probe

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled up replica set ironic-85b75c94bc to 1

openstack

statefulset-controller

cinder-6ac23-backup

SuccessfulDelete

delete Pod cinder-6ac23-backup-0 in StatefulSet cinder-6ac23-backup successful

openstack

kubelet

dnsmasq-dns-6b45666449-v77b5

Created

Created container: dnsmasq-dns

openstack

statefulset-controller

cinder-6ac23-scheduler

SuccessfulDelete

delete Pod cinder-6ac23-scheduler-0 in StatefulSet cinder-6ac23-scheduler successful

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Killing

Stopping container probe

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Killing

Stopping container cinder-volume

openstack

kubelet

cinder-6ac23-backup-0

Killing

Stopping container probe

openstack

replicaset-controller

ironic-85b75c94bc

SuccessfulCreate

Created pod: ironic-85b75c94bc-pp6mc

openstack

kubelet

dnsmasq-dns-6b45666449-v77b5

Started

Started container dnsmasq-dns

openstack

kubelet

cinder-6ac23-backup-0

Killing

Stopping container cinder-backup

openstack

statefulset-controller

cinder-6ac23-volume-lvm-iscsi

SuccessfulDelete

delete Pod cinder-6ac23-volume-lvm-iscsi-0 in StatefulSet cinder-6ac23-volume-lvm-iscsi successful

openstack

kubelet

cinder-6ac23-scheduler-0

Killing

Stopping container cinder-scheduler

openstack

kubelet

ironic-neutron-agent-7d8f6784f6-dqjdm

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:a3f5b519c7fc33e9f66fe553a7bc5cce51c3ff01223190cfa93bb75149a1dfcc" in 3.071s (3.071s including waiting). Image size: 655390550 bytes.

openstack

kubelet

ironic-75c678c459-9mmbb

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" in 2.714s (2.714s including waiting). Image size: 536433442 bytes.

openstack

kubelet

ironic-75c678c459-9mmbb

Created

Created container: init

openstack

kubelet

ironic-75c678c459-9mmbb

Started

Started container init

openstack

multus

ironic-85b75c94bc-pp6mc

AddedInterface

Add eth0 [10.128.0.234/23] from ovn-kubernetes

openstack

kubelet

ironic-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" already present on machine

openstack

kubelet

ironic-85b75c94bc-pp6mc

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine

openstack

multus

ironic-conductor-0

AddedInterface

Add eth0 [10.128.0.233/23] from ovn-kubernetes

openstack

multus

ironic-conductor-0

AddedInterface

Add ironic [172.20.1.31/24] from openstack/ironic

openstack

job-controller

ironic-inspector-db-create

Completed

Job completed
(x2)

openstack

statefulset-controller

cinder-6ac23-backup

SuccessfulCreate

create Pod cinder-6ac23-backup-0 in StatefulSet cinder-6ac23-backup successful

openstack

kubelet

ironic-conductor-0

Started

Started container init

openstack

kubelet

ironic-85b75c94bc-pp6mc

Created

Created container: init

openstack

job-controller

ironic-inspector-2bdc-account-create-update

Completed

Job completed

openstack

metallb-speaker

keystone-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

ironic-conductor-0

Created

Created container: init
(x2)

openstack

statefulset-controller

cinder-6ac23-volume-lvm-iscsi

SuccessfulCreate

create Pod cinder-6ac23-volume-lvm-iscsi-0 in StatefulSet cinder-6ac23-volume-lvm-iscsi successful

openstack

kubelet

ironic-85b75c94bc-pp6mc

Started

Started container init

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Created

Created container: probe

openstack

metallb-speaker

placement-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" already present on machine

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Started

Started container probe

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Created

Created container: cinder-volume

openstack

kubelet

cinder-6ac23-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" already present on machine

openstack

kubelet

ironic-75c678c459-9mmbb

Created

Created container: ironic-api-log

openstack

kubelet

ironic-75c678c459-9mmbb

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine

openstack

multus

cinder-6ac23-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:f9dfcaf0f89625d12d35117fdd2448e5aac09548cca83600434fc7224d3d640d" already present on machine

openstack

kubelet

ironic-75c678c459-9mmbb

Started

Started container ironic-api-log

openstack

kubelet

cinder-6ac23-volume-lvm-iscsi-0

Started

Started container cinder-volume

openstack

multus

cinder-6ac23-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.235/23] from ovn-kubernetes

openstack

multus

cinder-6ac23-backup-0

AddedInterface

Add eth0 [10.128.0.236/23] from ovn-kubernetes
(x2)

openstack

kubelet

ironic-75c678c459-9mmbb

Created

Created container: ironic-api
(x2)

openstack

kubelet

ironic-75c678c459-9mmbb

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine
(x2)

openstack

kubelet

ironic-75c678c459-9mmbb

Started

Started container ironic-api

openstack

kubelet

ironic-85b75c94bc-pp6mc

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine

openstack

kubelet

cinder-6ac23-backup-0

Created

Created container: cinder-backup

openstack

kubelet

ironic-85b75c94bc-pp6mc

Started

Started container ironic-api-log

openstack

kubelet

cinder-6ac23-backup-0

Started

Started container cinder-backup

openstack

kubelet

cinder-6ac23-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:134770a92dce9d2daaef4cc63d7bea88edd55e2710e7c7457c4ee3d14469fbe7" already present on machine

openstack

kubelet

ironic-85b75c94bc-pp6mc

Created

Created container: ironic-api-log

openstack

kubelet

ironic-85b75c94bc-pp6mc

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:02f558cf4c54f02f0032f919b1e02673d2f65438aa1d837ab4587728c50cbc2f" already present on machine

openstack

kubelet

cinder-6ac23-backup-0

Started

Started container probe

openstack

multus

openstackclient

AddedInterface

Add eth0 [10.128.0.237/23] from ovn-kubernetes

openstack

kubelet

cinder-6ac23-backup-0

Created

Created container: probe

openstack

kubelet

ironic-85b75c94bc-pp6mc

Started

Started container ironic-api

openstack

kubelet

ironic-85b75c94bc-pp6mc

Created

Created container: ironic-api

openstack

kubelet

openstackclient

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:f446a1c7e6aed77f28fca3c632fb8d356e361e784dc15d5dc1e235886ab536bd"

openstack

kubelet

ironic-conductor-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6"
(x2)

openstack

statefulset-controller

cinder-6ac23-scheduler

SuccessfulCreate

create Pod cinder-6ac23-scheduler-0 in StatefulSet cinder-6ac23-scheduler successful

openstack

metallb-speaker

cinder-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

replicaset-controller

dnsmasq-dns-7f74bd995c

SuccessfulDelete

Deleted pod: dnsmasq-dns-7f74bd995c-jflbg

openstack

kubelet

dnsmasq-dns-7f74bd995c-jflbg

Killing

Stopping container dnsmasq-dns

openstack

multus

cinder-6ac23-scheduler-0

AddedInterface

Add eth0 [10.128.0.238/23] from ovn-kubernetes

openstack

kubelet

cinder-6ac23-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" already present on machine

openstack

kubelet

cinder-6ac23-scheduler-0

Created

Created container: cinder-scheduler

openstack

kubelet

cinder-6ac23-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:dd54b357df51b40fef6ecfd5f1541602d493e4935acab02e30ef605c916617c4" already present on machine
(x3)

openstack

kubelet

ironic-75c678c459-9mmbb

BackOff

Back-off restarting failed container ironic-api in pod ironic-75c678c459-9mmbb_openstack(9ef8199f-6610-44a1-b85c-fc7f2bea6294)

openstack

kubelet

cinder-6ac23-scheduler-0

Started

Started container cinder-scheduler

openstack

kubelet

cinder-6ac23-scheduler-0

Started

Started container probe

openstack

kubelet

cinder-6ac23-scheduler-0

Created

Created container: probe

openstack

replicaset-controller

swift-proxy-675fbd6d58

SuccessfulCreate

Created pod: swift-proxy-675fbd6d58-pdtfj

openstack

deployment-controller

swift-proxy

ScalingReplicaSet

Scaled up replica set swift-proxy-675fbd6d58 to 1
(x3)

openstack

statefulset-controller

glance-8705a-default-internal-api

SuccessfulDelete

delete Pod glance-8705a-default-internal-api-0 in StatefulSet glance-8705a-default-internal-api successful

openstack

kubelet

glance-8705a-default-internal-api-0

Killing

Stopping container glance-log

openstack

kubelet

glance-8705a-default-internal-api-0

Killing

Stopping container glance-httpd

openstack

job-controller

ironic-inspector-db-sync

SuccessfulCreate

Created pod: ironic-inspector-db-sync-8hw9n

openstack

kubelet

ironic-75c678c459-9mmbb

Killing

Stopping container ironic-api-log

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled down replica set ironic-75c678c459 to 0 from 1

openstack

replicaset-controller

ironic-75c678c459

SuccessfulDelete

Deleted pod: ironic-75c678c459-9mmbb
(x2)

openstack

kubelet

ironic-neutron-agent-7d8f6784f6-dqjdm

BackOff

Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-7d8f6784f6-dqjdm_openstack(72106e8c-2a98-4a82-9f36-c820986c5665)

openstack

kubelet

swift-proxy-675fbd6d58-pdtfj

Created

Created container: proxy-httpd

openstack

multus

swift-proxy-675fbd6d58-pdtfj

AddedInterface

Add eth0 [10.128.0.239/23] from ovn-kubernetes
(x3)

openstack

metallb-speaker

ironic-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

swift-proxy-675fbd6d58-pdtfj

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123" already present on machine

openstack

kubelet

swift-proxy-675fbd6d58-pdtfj

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:bed63ddf64b7a100451f17bc370e74648fb3db9db0d3c538b07396a00fdbd123" already present on machine

openstack

kubelet

ironic-inspector-db-sync-8hw9n

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38"

openstack

kubelet

swift-proxy-675fbd6d58-pdtfj

Started

Started container proxy-httpd

openstack

multus

ironic-inspector-db-sync-8hw9n

AddedInterface

Add eth0 [10.128.0.240/23] from ovn-kubernetes

openstack

kubelet

swift-proxy-675fbd6d58-pdtfj

Started

Started container proxy-server

openstack

kubelet

swift-proxy-675fbd6d58-pdtfj

Created

Created container: proxy-server

openstack

job-controller

nova-cell1-db-create

SuccessfulCreate

Created pod: nova-cell1-db-create-xrhk2

openstack

job-controller

nova-api-db-create

SuccessfulCreate

Created pod: nova-api-db-create-pf58r

openstack

job-controller

nova-cell0-db-create

SuccessfulCreate

Created pod: nova-cell0-db-create-d8zwm

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled down replica set neutron-55455d5d8d to 0 from 1

openstack

job-controller

nova-api-05a5-account-create-update

SuccessfulCreate

Created pod: nova-api-05a5-account-create-update-bt8vb

openstack

kubelet

neutron-55455d5d8d-zzwzz

Killing

Stopping container neutron-api

openstack

replicaset-controller

neutron-55455d5d8d

SuccessfulDelete

Deleted pod: neutron-55455d5d8d-zzwzz

openstack

kubelet

neutron-55455d5d8d-zzwzz

Killing

Stopping container neutron-httpd

openstack

job-controller

nova-cell1-d8b9-account-create-update

SuccessfulCreate

Created pod: nova-cell1-d8b9-account-create-update-kq9f4

openstack

job-controller

nova-cell0-7331-account-create-update

SuccessfulCreate

Created pod: nova-cell0-7331-account-create-update-4cdxr

openstack

kubelet

glance-8705a-default-external-api-0

Killing

Stopping container glance-httpd

openstack

kubelet

glance-8705a-default-external-api-0

Killing

Stopping container glance-log
(x3)

openstack

statefulset-controller

glance-8705a-default-external-api

SuccessfulDelete

delete Pod glance-8705a-default-external-api-0 in StatefulSet glance-8705a-default-external-api successful

openstack

metallb-speaker

swift-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

glance-8705a-default-external-api-0

Unhealthy

Readiness probe failed: Get "https://10.128.0.215:9292/healthcheck": dial tcp 10.128.0.215:9292: connect: connection refused

openstack

kubelet

glance-8705a-default-external-api-0

Unhealthy

Readiness probe failed: Get "https://10.128.0.215:9292/healthcheck": dial tcp 10.128.0.215:9292: connect: connection refused
(x4)

openstack

statefulset-controller

glance-8705a-default-internal-api

SuccessfulCreate

create Pod glance-8705a-default-internal-api-0 in StatefulSet glance-8705a-default-internal-api successful

openstack

kubelet

ironic-inspector-db-sync-8hw9n

Started

Started container ironic-inspector-db-sync
(x4)

openstack

metallb-speaker

neutron-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

openstackclient

Started

Started container openstackclient

openstack

kubelet

ironic-inspector-db-sync-8hw9n

Created

Created container: ironic-inspector-db-sync

openstack

kubelet

ironic-inspector-db-sync-8hw9n

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" in 9.895s (9.895s including waiting). Image size: 539826777 bytes.

openstack

kubelet

openstackclient

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:f446a1c7e6aed77f28fca3c632fb8d356e361e784dc15d5dc1e235886ab536bd" in 17.812s (17.812s including waiting). Image size: 594534254 bytes.

openstack

kubelet

openstackclient

Created

Created container: openstackclient

openstack

kubelet

nova-cell0-7331-account-create-update-4cdxr

Created

Created container: mariadb-account-create-update

openstack

kubelet

nova-cell0-7331-account-create-update-4cdxr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

nova-cell0-7331-account-create-update-4cdxr

Started

Started container mariadb-account-create-update

openstack

multus

nova-cell0-7331-account-create-update-4cdxr

AddedInterface

Add eth0 [10.128.0.245/23] from ovn-kubernetes

openstack

kubelet

nova-api-db-create-pf58r

Started

Started container mariadb-database-create

openstack

kubelet

nova-api-db-create-pf58r

Created

Created container: mariadb-database-create

openstack

kubelet

nova-api-db-create-pf58r

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

multus

nova-api-db-create-pf58r

AddedInterface

Add eth0 [10.128.0.241/23] from ovn-kubernetes

openstack

kubelet

nova-api-05a5-account-create-update-bt8vb

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-api-05a5-account-create-update-bt8vb

Created

Created container: mariadb-account-create-update

openstack

kubelet

nova-api-05a5-account-create-update-bt8vb

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

multus

nova-api-05a5-account-create-update-bt8vb

AddedInterface

Add eth0 [10.128.0.244/23] from ovn-kubernetes

openstack

multus

nova-cell0-db-create-d8zwm

AddedInterface

Add eth0 [10.128.0.242/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-db-create-d8zwm

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

nova-cell0-db-create-d8zwm

Created

Created container: mariadb-database-create

openstack

kubelet

nova-cell0-db-create-d8zwm

Started

Started container mariadb-database-create

openstack

multus

nova-cell1-d8b9-account-create-update-kq9f4

AddedInterface

Add eth0 [10.128.0.246/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-d8b9-account-create-update-kq9f4

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

nova-cell1-d8b9-account-create-update-kq9f4

Created

Created container: mariadb-account-create-update

openstack

kubelet

nova-cell1-d8b9-account-create-update-kq9f4

Started

Started container mariadb-account-create-update

openstack

multus

nova-cell1-db-create-xrhk2

AddedInterface

Add eth0 [10.128.0.243/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-db-create-xrhk2

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:bafa959fd4a24c80de0c6b1c5adbf2b44992312068ca741c6a0717d49c919658" already present on machine

openstack

kubelet

nova-cell1-db-create-xrhk2

Created

Created container: mariadb-database-create
(x3)

openstack

kubelet

ironic-neutron-agent-7d8f6784f6-dqjdm

Created

Created container: ironic-neutron-agent
(x3)

openstack

kubelet

ironic-neutron-agent-7d8f6784f6-dqjdm

Started

Started container ironic-neutron-agent
(x2)

openstack

kubelet

ironic-neutron-agent-7d8f6784f6-dqjdm

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:a3f5b519c7fc33e9f66fe553a7bc5cce51c3ff01223190cfa93bb75149a1dfcc" already present on machine

openstack

kubelet

nova-cell1-db-create-xrhk2

Started

Started container mariadb-database-create
(x4)

openstack

statefulset-controller

glance-8705a-default-external-api

SuccessfulCreate

create Pod glance-8705a-default-external-api-0 in StatefulSet glance-8705a-default-external-api successful

openstack

multus

glance-8705a-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.247/23] from ovn-kubernetes

openstack

multus

glance-8705a-default-internal-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

kubelet

glance-8705a-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-8705a-default-internal-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-8705a-default-internal-api-0

Created

Created container: glance-log

openstack

kubelet

glance-8705a-default-internal-api-0

Started

Started container glance-httpd

openstack

kubelet

glance-8705a-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-8705a-default-internal-api-0

Started

Started container glance-log

openstack

multus

glance-8705a-default-external-api-0

AddedInterface

Add eth0 [10.128.0.248/23] from ovn-kubernetes

openstack

multus

glance-8705a-default-external-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

kubelet

glance-8705a-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

kubelet

glance-8705a-default-external-api-0

Created

Created container: glance-log

openstack

kubelet

glance-8705a-default-external-api-0

Started

Started container glance-log

openstack

kubelet

ironic-conductor-0

Created

Created container: ironic-python-agent-init

openstack

kubelet

ironic-conductor-0

Started

Started container ironic-python-agent-init

openstack

kubelet

ironic-conductor-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6" in 25.847s (25.847s including waiting). Image size: 786789676 bytes.

openstack

kubelet

glance-8705a-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:69927bf0036b7c213c1fa7c3879c3d6a7690de68bbbe38f078f45f21708e3416" already present on machine

openstack

job-controller

nova-api-05a5-account-create-update

Completed

Job completed

openstack

job-controller

nova-cell0-db-create

Completed

Job completed

openstack

job-controller

ironic-inspector-db-sync

Completed

Job completed

openstack

kubelet

glance-8705a-default-external-api-0

Created

Created container: glance-httpd

openstack

job-controller

nova-cell1-d8b9-account-create-update

Completed

Job completed

openstack

kubelet

glance-8705a-default-external-api-0

Started

Started container glance-httpd

openstack

job-controller

nova-api-db-create

Completed

Job completed

openstack

job-controller

nova-cell1-db-create

Completed

Job completed

openstack

job-controller

nova-cell0-7331-account-create-update

Completed

Job completed
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

replicaset-controller

dnsmasq-dns-7cc6c67c77

SuccessfulCreate

Created pod: dnsmasq-dns-7cc6c67c77-h5cpc

openstack

cert-manager-certificates-key-manager

ironic-inspector-internal-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-4vfwt"

openstack

metallb-controller

ironic-inspector-internal

IPAllocated

Assigned IP ["192.168.122.80"]
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificates-trigger

ironic-inspector-internal-svc

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-python-agent-init

openstack

cert-manager-certificates-request-manager

ironic-inspector-internal-svc

Requested

Created new CertificateRequest resource "ironic-inspector-internal-svc-1"

openstack

cert-manager-certificaterequests-approver

ironic-inspector-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-python-agent-init

openstack

cert-manager-certificates-issuing

ironic-inspector-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

ironic-inspector-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-svc

Requested

Created new CertificateRequest resource "ironic-inspector-public-svc-1"

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-svc-6v49k"

openstack

cert-manager-certificates-trigger

ironic-inspector-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6" already present on machine

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.250/23] from ovn-kubernetes

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

kubelet

dnsmasq-dns-7cc6c67c77-h5cpc

Started

Started container dnsmasq-dns

openstack

multus

dnsmasq-dns-7cc6c67c77-h5cpc

AddedInterface

Add eth0 [10.128.0.249/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7cc6c67c77-h5cpc

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-7cc6c67c77-h5cpc

Created

Created container: init

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-7cc6c67c77-h5cpc

Started

Started container init

openstack

kubelet

dnsmasq-dns-7cc6c67c77-h5cpc

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

kubelet

dnsmasq-dns-7cc6c67c77-h5cpc

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-inspector-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32"

openstack

cert-manager-certificates-issuing

ironic-inspector-public-route

Issuing

The certificate has been successfully issued

openstack

kubelet

ironic-conductor-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-route

Requested

Created new CertificateRequest resource "ironic-inspector-public-route-1"

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-route

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-route-55pmt"

openstack

cert-manager-certificates-trigger

ironic-inspector-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x3)

openstack

metallb-speaker

glance-default-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

statefulset-controller

ironic-inspector

SuccessfulDelete

delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

job-controller

nova-cell0-conductor-db-sync

SuccessfulCreate

Created pod: nova-cell0-conductor-db-sync-vdhjz

openstack

kubelet

nova-cell0-conductor-db-sync-vdhjz

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea"

openstack

multus

nova-cell0-conductor-db-sync-vdhjz

AddedInterface

Add eth0 [10.128.0.251/23] from ovn-kubernetes

openstack

kubelet

ironic-conductor-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" in 6.206s (6.206s including waiting). Image size: 657316612 bytes.

openstack

kubelet

ironic-inspector-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" in 6.202s (6.202s including waiting). Image size: 657316612 bytes.

openstack

kubelet

ironic-inspector-0

Started

Started container inspector-pxe-init

openstack

kubelet

ironic-inspector-0

Created

Created container: inspector-pxe-init

openstack

kubelet

ironic-inspector-0

Killing

Stopping container inspector-pxe-init

openstack

kubelet

ironic-conductor-0

Created

Created container: pxe-init

openstack

kubelet

ironic-conductor-0

Started

Started container pxe-init

openstack

replicaset-controller

dnsmasq-dns-6b45666449

SuccessfulDelete

Deleted pod: dnsmasq-dns-6b45666449-v77b5
(x2)

openstack

statefulset-controller

ironic-inspector

SuccessfulCreate

create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

kubelet

dnsmasq-dns-6b45666449-v77b5

Killing

Stopping container dnsmasq-dns

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.252/23] from ovn-kubernetes

openstack | kubelet | nova-cell0-conductor-db-sync-vdhjz | Created | Created container: nova-cell0-conductor-db-sync
openstack | kubelet | nova-cell0-conductor-db-sync-vdhjz | Started | Started container nova-cell0-conductor-db-sync
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:39c34f1c9081c33032671f13d154f7324f03ebc176102dabd8e22570a9afb5a6" already present on machine
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" already present on machine
openstack | kubelet | nova-cell0-conductor-db-sync-vdhjz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" in 11.655s (11.655s including waiting). Image size: 668212205 bytes.
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" already present on machine
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot
openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs
openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:af4ce32026a6f3e4a8984cc51ea160e31393ee7e63636e8c66157bcc2ccbcf38" already present on machine
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq
openstack | metallb-speaker | ironic-inspector-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
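The metallb-speaker nodeAssigned event records master-0 taking ownership of the service's LoadBalancer IP and answering ARP/NDP for it in layer2 mode. That behavior is driven by an IPAddressPool plus an L2Advertisement resource; a minimal sketch, where the pool name and address range are assumptions (only 172.17.0.80 is attested by the events below):

```yaml
# Hypothetical MetalLB layer2 setup consistent with the nodeAssigned event.
# Pool name and address range are assumptions.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
    - 172.17.0.80-172.17.0.90
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  ipAddressPools:
    - internalapi
```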

openstack | job-controller | nova-cell0-conductor-db-sync | Completed | Job completed
openstack | statefulset-controller | nova-cell0-conductor | SuccessfulCreate | create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful
openstack | multus | nova-cell0-conductor-0 | AddedInterface | Add eth0 [10.128.0.253/23] from ovn-kubernetes
openstack | kubelet | nova-cell0-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | kubelet | nova-cell0-conductor-0 | Created | Created container: nova-cell0-conductor-conductor
openstack | kubelet | nova-cell0-conductor-0 | Started | Started container nova-cell0-conductor-conductor
openstack | statefulset-controller | nova-cell1-compute-ironic-compute | SuccessfulCreate | create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful
openstack | job-controller | nova-cell0-cell-mapping | SuccessfulCreate | Created pod: nova-cell0-cell-mapping-mz9lj (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | multus | nova-cell1-compute-ironic-compute-0 | AddedInterface | Add eth0 [10.128.0.255/23] from ovn-kubernetes
openstack | replicaset-controller | dnsmasq-dns-6f6fd9d5d9 | SuccessfulCreate | Created pod: dnsmasq-dns-6f6fd9d5d9-zff6h
openstack | metallb-controller | nova-metadata-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | multus | nova-cell0-cell-mapping-mz9lj | AddedInterface | Add eth0 [10.128.0.254/23] from ovn-kubernetes (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
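The deprecatedAnnotation events flag the older `metallb.universe.tf/*` service annotations; recent MetalLB releases reportedly accept the same annotations under the `metallb.io` prefix, so the fix is a rename on the Service (verify against the MetalLB version actually deployed). A sketch of the migrated Service, where the pool name, sharing key, and port are assumptions (only the 172.17.0.80 address is attested by the IPAllocated event):

```yaml
# Hypothetical migration of the flagged Service annotations.
# Pool name, sharing key, and port are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: nova-metadata-internal
  namespace: openstack
  annotations:
    metallb.io/address-pool: internalapi      # was metallb.universe.tf/address-pool
    metallb.io/allow-shared-ip: internalapi   # was metallb.universe.tf/allow-shared-ip
    metallb.io/loadBalancerIPs: 172.17.0.80   # was metallb.universe.tf/loadBalancerIPs
spec:
  type: LoadBalancer
  ports:
    - name: metadata
      port: 8775
```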

openstack | cert-manager-certificates-trigger | nova-metadata-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | job-controller | nova-cell1-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell1-conductor-db-sync-7jt69
openstack | cert-manager-certificaterequests-issuer-venafi | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-metadata-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657"
openstack | cert-manager-certificaterequests-approver | nova-metadata-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.1/23] from ovn-kubernetes
openstack | cert-manager-certificates-key-manager | nova-metadata-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-metadata-internal-svc-88zn9"
openstack | cert-manager-certificates-request-manager | nova-metadata-internal-svc | Requested | Created new CertificateRequest resource "nova-metadata-internal-svc-1"
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.0/23] from ovn-kubernetes
openstack | kubelet | nova-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617"
openstack | cert-manager-certificaterequests-issuer-vault | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-cell0-cell-mapping-mz9lj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.2/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d"
openstack | kubelet | nova-cell0-cell-mapping-mz9lj | Created | Created container: nova-manage
openstack | multus | dnsmasq-dns-6f6fd9d5d9-zff6h | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes
openstack | kubelet | nova-cell0-cell-mapping-mz9lj | Started | Started container nova-manage
openstack | kubelet | nova-api-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657"
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | nova-metadata-internal-svc | Issuing | The certificate has been successfully issued
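The nova-metadata-internal-svc events above trace one complete cert-manager issuance: the trigger controller starts issuance because the target Secret does not exist, the key-manager stores a new private key in a temporary Secret, the request-manager creates CertificateRequest "nova-metadata-internal-svc-1", the approver approves it, each configured issuer type declines until approval (the WaitingForApproval events), the CA issuer fetches the signed certificate, and the issuing controller marks the Certificate ready. A sketch of the kind of Certificate resource that drives this flow; the secret name, issuer reference, and DNS name here are assumptions:

```yaml
# Hypothetical Certificate matching the event sequence above.
# secretName, issuerRef, and dnsNames are assumptions.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nova-metadata-internal-svc
  namespace: openstack
spec:
  secretName: cert-nova-metadata-internal-svc
  issuerRef:
    kind: Issuer
    name: rootca-internal
  dnsNames:
    - nova-metadata-internal.openstack.svc
```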

openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | nova-cell1-conductor-db-sync-7jt69 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-svc | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1"
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-svc | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-5gjnp"
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-6f6fd9d5d9-zff6h | Started | Started container init
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-6f6fd9d5d9-zff6h | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | kubelet | dnsmasq-dns-6f6fd9d5d9-zff6h | Created | Created container: init
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-novncproxy-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:bd2b75f4a9e51369f7c6352ddcf6520afb1f3ea8795a683466b6802da3c26f77"
openstack | kubelet | nova-cell1-conductor-db-sync-7jt69 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-6f6fd9d5d9-zff6h | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-route | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1"
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | kubelet | dnsmasq-dns-6f6fd9d5d9-zff6h | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6f6fd9d5d9-zff6h | Started | Started container dnsmasq-dns
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-route | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-bkphv"
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-vencrypt | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-cell1-conductor-db-sync-7jt69 | Started | Started container nova-cell1-conductor-db-sync
openstack | kubelet | nova-cell1-conductor-db-sync-7jt69 | Created | Created container: nova-cell1-conductor-db-sync
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617" in 3.428s (3.428s including waiting). Image size: 668216812 bytes.
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-vencrypt | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1"
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-vencrypt | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-vencrypt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-vencrypt | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-f6t7c"
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulDelete | delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" in 3.213s (3.213s including waiting). Image size: 685015783 bytes.
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" in 3.319s (3.319s including waiting). Image size: 685015783 bytes.
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:bd2b75f4a9e51369f7c6352ddcf6520afb1f3ea8795a683466b6802da3c26f77" in 2.897s (2.897s including waiting). Image size: 670576628 bytes.
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | dnsmasq-dns-7cc6c67c77-h5cpc | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-7cc6c67c77 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7cc6c67c77-h5cpc
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.2:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.2:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
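The repeated Unhealthy events mean the kubelet's HTTP GET against the nova-api startup probe timed out; with a startupProbe the container is only restarted after `failureThreshold` consecutive failures, so transient slowness during API start-up is tolerated up to that budget. A sketch of probe tuning for a slow-starting API container; every threshold value below is an assumption, not the operator's actual configuration:

```yaml
# Illustrative startupProbe for a container answering on :8774 (nova-api).
# periodSeconds/timeoutSeconds/failureThreshold values are assumptions.
startupProbe:
  httpGet:
    path: /
    port: 8774
  periodSeconds: 10
  timeoutSeconds: 10     # each GET may take up to 10s before it counts as a failure
  failureThreshold: 30   # allow roughly 5 minutes of startup time before restart
```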

openstack

kubelet

nova-cell1-compute-ironic-compute-0

Created

Created container: nova-cell1-compute-ironic-compute-compute

openstack

kubelet

nova-cell1-compute-ironic-compute-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:60339e5e0cd7bfe18718bee79174c18ef91b932586fd96f01b9799d5d120385d" in 14.489s (14.489s including waiting). Image size: 1216089983 bytes.

openstack

multus

nova-metadata-0

AddedInterface

Add eth0 [10.128.1.6/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-compute-ironic-compute-0

Started

Started container nova-cell1-compute-ironic-compute-compute

openstack

kubelet

nova-api-0

Killing

Stopping container nova-api-api

openstack

kubelet

nova-api-0

Killing

Stopping container nova-api-log

openstack

kubelet

nova-scheduler-0

Killing

Stopping container nova-scheduler-scheduler

openstack

kubelet

nova-metadata-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine

openstack

kubelet

ironic-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556" already present on machine

openstack

job-controller

nova-cell0-cell-mapping

Completed

Job completed

openstack

kubelet

ironic-conductor-0

Started

Started container ironic-conductor

openstack

kubelet

nova-metadata-0

Started

Started container nova-metadata-log

openstack

job-controller

nova-cell1-conductor-db-sync

Completed

Job completed

openstack

kubelet

nova-metadata-0

Created

Created container: nova-metadata-metadata

openstack

kubelet

nova-metadata-0

Started

Started container nova-metadata-metadata

openstack

kubelet

nova-metadata-0

Created

Created container: nova-metadata-log

openstack

kubelet

ironic-conductor-0

Created

Created container: ironic-conductor

openstack

kubelet

ironic-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" already present on machine

openstack

kubelet

nova-metadata-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine

openstack

statefulset-controller

nova-cell1-conductor

SuccessfulCreate

create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful

openstack

kubelet

ironic-conductor-0

Created

Created container: httpboot

openstack

kubelet

ironic-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:b69f23539d13f02573c618e1c90c273bba14714b04dd7e70930a78bf0bf17f32" already present on machine

openstack

kubelet

nova-metadata-0

Killing

Stopping container nova-metadata-log

openstack

kubelet

ironic-conductor-0

Started

Started container dnsmasq

openstack

kubelet

ironic-conductor-0

Created

Created container: dnsmasq

openstack

kubelet

nova-metadata-0

Killing

Stopping container nova-metadata-metadata

openstack

kubelet

ironic-conductor-0

Started

Started container httpboot

openstack

multus

nova-cell1-conductor-0

AddedInterface

Add eth0 [10.128.1.7/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-conductor-0

Started

Started container nova-cell1-conductor-conductor

openstack

kubelet

nova-cell1-conductor-0

Created

Created container: nova-cell1-conductor-conductor

openstack

kubelet

nova-cell1-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine

openstack

multus

nova-metadata-0

AddedInterface

Add eth0 [10.128.1.8/23] from ovn-kubernetes

openstack

kubelet

nova-metadata-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine

openstack

kubelet

nova-metadata-0

Created

Created container: nova-metadata-log

openstack

kubelet

dnsmasq-dns-7cc6c67c77-h5cpc

Unhealthy

Readiness probe failed: dial tcp 10.128.0.249:5353: i/o timeout

openstack

kubelet

nova-scheduler-0

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openstack

kubelet

nova-metadata-0

Started

Started container nova-metadata-log

openstack

kubelet

nova-metadata-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine

openstack

kubelet

nova-metadata-0

Created

Created container: nova-metadata-metadata

openstack

kubelet

nova-metadata-0

Started

Started container nova-metadata-metadata

openstack

kubelet

nova-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine

openstack

kubelet

nova-api-0

Started

Started container nova-api-log

openstack

multus

nova-api-0

AddedInterface

Add eth0 [10.128.1.9/23] from ovn-kubernetes

openstack

kubelet

nova-api-0

Created

Created container: nova-api-log

openstack

kubelet

nova-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine

openstack

kubelet

nova-api-0

Created

Created container: nova-api-api

openstack

kubelet

nova-api-0

Started

Started container nova-api-api

openstack

kubelet

nova-scheduler-0

Started

Started container nova-scheduler-scheduler

openstack

multus

nova-scheduler-0

AddedInterface

Add eth0 [10.128.1.10/23] from ovn-kubernetes

openstack

kubelet

nova-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617" already present on machine

openstack

kubelet

nova-scheduler-0

Created

Created container: nova-scheduler-scheduler

openstack

kubelet

nova-metadata-0

Unhealthy

Startup probe failed: Get "https://10.128.1.8:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

nova-metadata-0

Unhealthy

Startup probe failed: Get "https://10.128.1.8:8775/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

nova-api-0

Unhealthy

Startup probe failed: Get "http://10.128.1.9:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

nova-api-0

Unhealthy

Startup probe failed: Get "http://10.128.1.9:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
(x2)

openstack

statefulset-controller

nova-cell1-novncproxy

SuccessfulCreate

create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful

openstack

multus

nova-cell1-novncproxy-0

AddedInterface

Add eth0 [10.128.1.11/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-novncproxy-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:bd2b75f4a9e51369f7c6352ddcf6520afb1f3ea8795a683466b6802da3c26f77" already present on machine

openstack

kubelet

nova-cell1-novncproxy-0

Started

Started container nova-cell1-novncproxy-novncproxy

openstack

kubelet

nova-cell1-novncproxy-0

Created

Created container: nova-cell1-novncproxy-novncproxy

openstack

replicaset-controller

dnsmasq-dns-7586c46c57

SuccessfulCreate

Created pod: dnsmasq-dns-7586c46c57-vgvpz
(x2)

openstack

metallb-controller

nova-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

nova-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

nova-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificates-trigger

nova-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

metallb-controller

nova-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

cert-manager-certificates-request-manager

nova-public-svc

Requested

Created new CertificateRequest resource "nova-public-svc-1"

openstack

cert-manager-certificates-issuing

nova-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

nova-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

nova-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

dnsmasq-dns-7586c46c57-vgvpz

Started

Started container init

openstack

kubelet

dnsmasq-dns-7586c46c57-vgvpz

Created

Created container: init

openstack

kubelet

dnsmasq-dns-7586c46c57-vgvpz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine

openstack

multus

dnsmasq-dns-7586c46c57-vgvpz

AddedInterface

Add eth0 [10.128.1.12/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-vault

nova-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

nova-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

nova-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

nova-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-w8tr5"
openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1"
openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-k7xbl"
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | nova-public-route | Generated | Stored new private key in temporary Secret resource "nova-public-route-ntnhq"
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | nova-public-route | Requested | Created new CertificateRequest resource "nova-public-route-1"
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved

openstack | kubelet | dnsmasq-dns-7586c46c57-vgvpz | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-7586c46c57-vgvpz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" already present on machine
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | nova-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-issuing | nova-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | dnsmasq-dns-7586c46c57-vgvpz | Started | Started container dnsmasq-dns
openstack | cert-manager-certificaterequests-approver | nova-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-rxn8v
openstack | job-controller | nova-cell1-host-discover | SuccessfulCreate | Created pod: nova-cell1-host-discover-stqn6
openstack | multus | nova-cell1-cell-mapping-rxn8v | AddedInterface | Add eth0 [10.128.1.13/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-host-discover-stqn6 | Started | Started container nova-manage
openstack | kubelet | nova-cell1-cell-mapping-rxn8v | Created | Created container: nova-manage
openstack | kubelet | nova-cell1-cell-mapping-rxn8v | Started | Started container nova-manage
openstack | kubelet | nova-cell1-cell-mapping-rxn8v | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | kubelet | nova-cell1-host-discover-stqn6 | Created | Created container: nova-manage
openstack | kubelet | nova-cell1-host-discover-stqn6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:aac83a622c312704170c997161bb183a4d79acb8cf46badb2eae802bfbfe6dea" already present on machine
openstack | multus | nova-cell1-host-discover-stqn6 | AddedInterface | Add eth0 [10.128.1.14/23] from ovn-kubernetes

openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.15/23] from ovn-kubernetes
openstack | replicaset-controller | dnsmasq-dns-6f6fd9d5d9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-6f6fd9d5d9-zff6h
openstack | kubelet | dnsmasq-dns-6f6fd9d5d9-zff6h | Killing | Stopping container dnsmasq-dns
openstack | job-controller | nova-cell1-host-discover | Completed | Job completed
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata (x2)
openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful (x3)
openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful (x3)
openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful
openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 (x4)

openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.16/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.8:8775/": read tcp 10.128.0.2:60452->10.128.1.8:8775: read: connection reset by peer
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.8:8775/": read tcp 10.128.0.2:60454->10.128.1.8:8775: read: connection reset by peer
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api (x4)
openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:84b234c61aceee3f56a9cb8ad107f25eb87d09f9d595abfff1e0e7e089c3f657" already present on machine
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.17/23] from ovn-kubernetes (x3)
openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.18/23] from ovn-kubernetes
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a5cc825ffbcba14182570fb5de656c801b2353bb65502896475a741907682617" already present on machine
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.16:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.16:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.17:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.17:8775/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x12)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service (x12)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service (x3)
openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x3)
openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"

sushy-emulator | replicaset-controller | sushy-emulator-78f6d7d749 | SuccessfulDelete | Deleted pod: sushy-emulator-78f6d7d749-q2bh9
sushy-emulator | kubelet | sushy-emulator-78f6d7d749-q2bh9 | Killing | Stopping container sushy-emulator
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled down replica set sushy-emulator-78f6d7d749 to 0 from 1
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-84965d5d88 to 1
sushy-emulator | replicaset-controller | sushy-emulator-84965d5d88 | SuccessfulCreate | Created pod: sushy-emulator-84965d5d88-6n2dg
sushy-emulator | multus | sushy-emulator-84965d5d88-6n2dg | AddedInterface | Add ironic [172.20.1.71/24] from sushy-emulator/ironic
sushy-emulator | kubelet | sushy-emulator-84965d5d88-6n2dg | Started | Started container sushy-emulator
sushy-emulator | multus | sushy-emulator-84965d5d88-6n2dg | AddedInterface | Add eth0 [10.128.1.19/23] from ovn-kubernetes
sushy-emulator | kubelet | sushy-emulator-84965d5d88-6n2dg | Pulled | Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" already present on machine
sushy-emulator | kubelet | sushy-emulator-84965d5d88-6n2dg | Created | Created container: sushy-emulator

openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | ProbeError | Readiness probe error: Get "https://10.128.0.96:9443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Unhealthy | Readiness probe failed: Get "https://10.128.0.96:9443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | ProbeError | Readiness probe error: Get "https://10.128.0.96:9443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b9tqsfz | Unhealthy | Liveness probe failed: Get "http://10.128.0.156:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b9tqsfz | Unhealthy | Liveness probe failed: Get "http://10.128.0.156:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openshift-monitoring | kubelet | monitoring-plugin-5d9ddb8754-xtrdd | Unhealthy | Readiness probe failed: Get "https://10.128.0.96:9443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | openstack-galera-0 | Unhealthy | Readiness probe failed: command timed out
openshift-controller-manager | kubelet | controller-manager-c67bf58c9-mn7dg | Unhealthy | Readiness probe failed: Get "https://10.128.0.95:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openshift-controller-manager | kubelet | controller-manager-c67bf58c9-mn7dg | ProbeError | Readiness probe error: Get "https://10.128.0.95:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body:
openshift-marketplace | kubelet | redhat-marketplace-qqt7p | Unhealthy | Readiness probe failed: timeout: failed to connect service ":50051" within 1s (x2)
openstack | kubelet | openstack-cell1-galera-0 | Unhealthy | Readiness probe failed: command timed out
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zfd69 | Unhealthy | Readiness probe failed: Get "http://10.128.0.144:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-5t6bt | Unhealthy | Readiness probe failed: Get "http://10.128.0.146:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-5t6bt | Unhealthy | Liveness probe failed: Get "http://10.128.0.146:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-5t6bt | Unhealthy | Readiness probe failed: Get "http://10.128.0.146:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | cinder-6ac23-scheduler-0 | Unhealthy | Liveness probe failed: Get "http://10.128.0.238:8080/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x2)
openstack | kubelet | openstack-galera-0 | Unhealthy | Liveness probe failed: command timed out
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-5t6bt | Unhealthy | Liveness probe failed: Get "http://10.128.0.146:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack-operators | kubelet | glance-operator-controller-manager-784b5bb6c5-zfd69 | Unhealthy | Readiness probe failed: Get "http://10.128.0.144:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-monitoring | kubelet | thanos-querier-69565684c5-snfqm | ProbeError | Readiness probe error: Get "https://10.128.0.101:9091/-/ready": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:
openshift-monitoring | kubelet | thanos-querier-69565684c5-snfqm | Unhealthy | Readiness probe failed: Get "https://10.128.0.101:9091/-/ready": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openshift-monitoring | kubelet | thanos-querier-69565684c5-snfqm | ProbeError | Readiness probe error: Get "https://10.128.0.101:9091/-/ready": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:
openshift-monitoring | kubelet | thanos-querier-69565684c5-snfqm | Unhealthy | Readiness probe failed: Get "https://10.128.0.101:9091/-/ready": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (x2)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b9tqsfz | Unhealthy | Readiness probe failed: Get "http://10.128.0.156:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x2)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-579b7786b9tqsfz | Unhealthy | Readiness probe failed: Get "http://10.128.0.156:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
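The probe-failure events above are easiest to read when tallied by namespace and reason. A minimal sketch, assuming only the five-column event layout used in this listing; the sample rows are copied from the events above, while the tuple representation and tally logic are illustrative, not part of any tooling shown here:

```python
# Tally event rows by (namespace, reason). Rows are
# (namespace, component, related_object, reason) tuples; the four samples
# below are taken from the probe-failure events in this listing.
from collections import Counter

events = [
    ("openshift-monitoring", "kubelet", "monitoring-plugin-5d9ddb8754-xtrdd", "ProbeError"),
    ("openshift-monitoring", "kubelet", "monitoring-plugin-5d9ddb8754-xtrdd", "Unhealthy"),
    ("openstack-operators", "kubelet", "openstack-baremetal-operator-controller-manager-579b7786b9tqsfz", "Unhealthy"),
    ("openstack", "kubelet", "openstack-galera-0", "Unhealthy"),
]

# Count occurrences of each (namespace, reason) pair, most frequent first.
tally = Counter((ns, reason) for ns, _, _, reason in events)
for (ns, reason), n in tally.most_common():
    print(f"{ns}\t{reason}\t{n}")
```

The same grouping can be applied to the full listing once each flattened record is collected into one tuple per event.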

openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29531685
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531685 | SuccessfulCreate | Created pod: collect-profiles-29531685-l2l87
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531685-l2l87 | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | multus | collect-profiles-29531685-l2l87 | AddedInterface | Add eth0 [10.128.1.20/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531685-l2l87 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531685-l2l87 | Created | Created container: collect-profiles (x2)
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29531685, condition: Complete
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29531640
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531685 | Completed | Job completed
openshift-operator-lifecycle-manager | kubelet | packageserver-597975fc65-xcl6c | Unhealthy | Liveness probe failed: Get "https://10.128.0.51:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
openshift-operator-lifecycle-manager | kubelet | packageserver-597975fc65-xcl6c | ProbeError | Liveness probe error: Get "https://10.128.0.51:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
openstack | kubelet | openstack-cell1-galera-0 | Unhealthy | Liveness probe failed: command timed out

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-q59hq | Unhealthy | Liveness probe failed: Get "http://10.128.0.163:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x2)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-q59hq | Unhealthy | Readiness probe failed: Get "http://10.128.0.163:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x2)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-q59hq | Unhealthy | Readiness probe failed: Get "http://10.128.0.163:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack-operators | kubelet | openstack-operator-controller-manager-5dc486cffc-q59hq | Unhealthy | Liveness probe failed: Get "http://10.128.0.163:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
metallb-system | kubelet | metallb-operator-webhook-server-559d754c8d-8sgn7 | Unhealthy | Liveness probe failed: Get "http://10.128.0.121:7472/metrics": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
metallb-system | kubelet | metallb-operator-webhook-server-559d754c8d-8sgn7 | Unhealthy | Readiness probe failed: Get "http://10.128.0.121:7472/metrics": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
metallb-system | kubelet | metallb-operator-webhook-server-559d754c8d-8sgn7 | Unhealthy | Liveness probe failed: Get "http://10.128.0.121:7472/metrics": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
metallb-system | kubelet | metallb-operator-webhook-server-559d754c8d-8sgn7 | Unhealthy | Readiness probe failed: Get "http://10.128.0.121:7472/metrics": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531700 | SuccessfulCreate | Created pod: collect-profiles-29531700-q4sct
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29531700
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531700-q4sct | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | multus | collect-profiles-29531700-q4sct | AddedInterface | Add eth0 [10.128.1.21/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531700-q4sct | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531700-q4sct | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29531655
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531700 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29531700, condition: Complete
openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29531701
openstack | job-controller | keystone-cron-29531701 | SuccessfulCreate | Created pod: keystone-cron-29531701-28wv4
openstack | multus | keystone-cron-29531701-28wv4 | AddedInterface | Add eth0 [10.128.1.22/23] from ovn-kubernetes
openstack | kubelet | keystone-cron-29531701-28wv4 | Created | Created container: keystone-cron
openstack | kubelet | keystone-cron-29531701-28wv4 | Started | Started container keystone-cron
openstack | kubelet | keystone-cron-29531701-28wv4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:f278ba51944ed62c43b97612829b4723befd0c69bf7f4bd305230a7e4ceb5ec8" already present on machine
openstack | job-controller | keystone-cron-29531701 | Completed | Job completed
openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29531701, condition: Complete

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531715 | SuccessfulCreate | Created pod: collect-profiles-29531715-pdf5j
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29531715
openshift-operator-lifecycle-manager | multus | collect-profiles-29531715-pdf5j | AddedInterface | Add eth0 [10.128.1.23/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531715-pdf5j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531715-pdf5j | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29531715-pdf5j | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29531715 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29531670 (x2)
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29531715, condition: Complete
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-k6m47 namespace
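Several events in this listing carry a trailing "(xN)" marker, the aggregation count Kubernetes attaches when the same event repeats. A minimal sketch for folding that marker back into a number when post-processing such a dump; the helper name and the split-into-tuple convention are assumptions, not part of any tooling shown here:

```python
# Split a trailing " (xN)" repeat marker off an event message.
# Returns (message_without_marker, count); count defaults to 1
# when no marker is present.
import re

def repeat_count(message: str) -> tuple[str, int]:
    m = re.search(r"\s*\(x(\d+)\)$", message)
    if not m:
        return message, 1
    return message[: m.start()], int(m.group(1))

# Example taken from an event above:
msg, n = repeat_count("updated resource rabbitmq-nodes of Type *v1.Service (x12)")
```

Summing the returned counts rather than the raw line count gives the true number of event occurrences.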