Time Namespace Component RelatedObject Reason Message

openshift-cluster-machine-approver

machine-approver-955fcfb87-cwdkv

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-955fcfb87-cwdkv to master-0

openstack

nova-api-8a73-account-create-update-s57x2

Scheduled

Successfully assigned openstack/nova-api-8a73-account-create-update-s57x2 to master-0

openshift-cloud-controller-manager-operator

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-wp42j to master-0

openshift-multus

multus-admission-controller-56bbfd46b8-6qcf8

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-56bbfd46b8-6qcf8 to master-0

openstack-operators

openstack-operator-index-klqvq

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-klqvq to master-0

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-r8xj9

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8xj9 to master-0

openstack-operators

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-grrjr to master-0

openshift-multus

multus-admission-controller-cb4c85d9-8ltxz

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-cb4c85d9-8ltxz to master-0

openstack-operators

openstack-operator-controller-init-6f44f7b99f-fplrp

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-6f44f7b99f-fplrp to master-0

openstack-operators

swift-operator-controller-manager-9b9ff9f4d-s8tqw

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-9b9ff9f4d-s8tqw to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5 to master-0

openstack

glance-db-create-8jd9b

Scheduled

Successfully assigned openstack/glance-db-create-8jd9b to master-0

openstack

glance-db-sync-s9668

Scheduled

Successfully assigned openstack/glance-db-sync-s9668 to master-0

openstack

ironic-20ba-account-create-update-4dtlr

Scheduled

Successfully assigned openstack/ironic-20ba-account-create-update-4dtlr to master-0

openstack

ironic-6767bc4dd7-cp8fn

Scheduled

Successfully assigned openstack/ironic-6767bc4dd7-cp8fn to master-0

cert-manager

cert-manager-cainjector-5545bd876-p74j2

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-p74j2 to master-0

openstack

ironic-conductor-0

Scheduled

Successfully assigned openstack/ironic-conductor-0 to master-0

openstack

ironic-db-create-j9hg2

Scheduled

Successfully assigned openstack/ironic-db-create-j9hg2 to master-0

openstack

ironic-db-sync-mtvqh

Scheduled

Successfully assigned openstack/ironic-db-sync-mtvqh to master-0

openstack

ironic-f97759bbc-nbv8w

Scheduled

Successfully assigned openstack/ironic-f97759bbc-nbv8w to master-0

openstack

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openshift-ingress

router-default-79f8cd6fdd-858hg

Scheduled

Successfully assigned openshift-ingress/router-default-79f8cd6fdd-858hg to master-0

openstack

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openstack

ironic-inspector-d904-account-create-update-tc485

Scheduled

Successfully assigned openstack/ironic-inspector-d904-account-create-update-tc485 to master-0

cert-manager

cert-manager-webhook-6888856db4-vmqtf

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-6888856db4-vmqtf to master-0

openstack-operators

telemetry-operator-controller-manager-5fdb694969-bbqxt

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-5fdb694969-bbqxt to master-0

openstack

ironic-inspector-db-sync-hst88

Scheduled

Successfully assigned openstack/ironic-inspector-db-sync-hst88 to master-0

openstack

ironic-neutron-agent-89874fdc8-kjtzj

Scheduled

Successfully assigned openstack/ironic-neutron-agent-89874fdc8-kjtzj to master-0

openstack

keystone-798d5f97fb-2sbnv

Scheduled

Successfully assigned openstack/keystone-798d5f97fb-2sbnv to master-0

openstack

keystone-bootstrap-m6pmp

Scheduled

Successfully assigned openstack/keystone-bootstrap-m6pmp to master-0

openstack

keystone-bootstrap-zh2n5

Scheduled

Successfully assigned openstack/keystone-bootstrap-zh2n5 to master-0

openstack

keystone-c490-account-create-update-rc6gq

Scheduled

Successfully assigned openstack/keystone-c490-account-create-update-rc6gq to master-0

openstack

keystone-cron-29548681-5hg8p

Scheduled

Successfully assigned openstack/keystone-cron-29548681-5hg8p to master-0

openstack

glance-213eb-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-213eb-default-internal-api-0 to master-0

openstack-operators

octavia-operator-controller-manager-5d86c7ddb7-2plwq

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-2plwq to master-0

openstack-operators

nova-operator-controller-manager-74b6b5dc96-ndppt

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-74b6b5dc96-ndppt to master-0

openstack

glance-213eb-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-213eb-default-internal-api-0 to master-0

openstack

keystone-db-create-q8tkd

Scheduled

Successfully assigned openstack/keystone-db-create-q8tkd to master-0

openstack

keystone-db-sync-cmr5t

Scheduled

Successfully assigned openstack/keystone-db-sync-cmr5t to master-0

openshift-route-controller-manager

route-controller-manager-6d8686f75f-9t2lk

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-6d8686f75f-9t2lk to master-0

openshift-cloud-controller-manager-operator

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-pmr9d to master-0

openstack-operators

neutron-operator-controller-manager-54688575f-vj8dt

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-54688575f-vj8dt to master-0

openstack-operators

mariadb-operator-controller-manager-7b6bfb6475-j288g

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-j288g to master-0

openstack-operators

test-operator-controller-manager-55b5ff4dbb-9cpc2

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-55b5ff4dbb-9cpc2 to master-0

openshift-controller-manager

controller-manager-68f988879c-j2dj6

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-console-operator

console-operator-6c7fb6b958-2grlf

Scheduled

Successfully assigned openshift-console-operator/console-operator-6c7fb6b958-2grlf to master-0

openshift-controller-manager

controller-manager-68f988879c-j2dj6

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-6d8686f75f-9t2lk

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-68f988879c-j2dj6

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-68f988879c-j2dj6 to master-0

openshift-console

downloads-84f57b9877-dwqg9

Scheduled

Successfully assigned openshift-console/downloads-84f57b9877-dwqg9 to master-0

openstack-operators

manila-operator-controller-manager-67d996989d-7ksrz

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-67d996989d-7ksrz to master-0

openshift-multus

cni-sysctl-allowlist-ds-2hhhs

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-2hhhs to master-0

openshift-nmstate

nmstate-console-plugin-5dcbbd79cf-cbbp5

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-cbbp5 to master-0

openshift-multus

cni-sysctl-allowlist-ds-rhtr2

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-rhtr2 to master-0

openstack

memcached-0

Scheduled

Successfully assigned openstack/memcached-0 to master-0

openstack

neutron-b4a7-account-create-update-xgrjd

Scheduled

Successfully assigned openstack/neutron-b4a7-account-create-update-xgrjd to master-0

openstack

neutron-db-create-dkp4f

Scheduled

Successfully assigned openstack/neutron-db-create-dkp4f to master-0

openstack

neutron-db-sync-97jz8

Scheduled

Successfully assigned openstack/neutron-db-sync-97jz8 to master-0

openshift-nmstate

nmstate-handler-9lvhn

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-9lvhn to master-0

sushy-emulator

sushy-emulator-84965d5d88-9qs5d

Scheduled

Successfully assigned sushy-emulator/sushy-emulator-84965d5d88-9qs5d to master-0

openshift-monitoring

kube-state-metrics-68b88f8cb5-5cj66

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-68b88f8cb5-5cj66 to master-0

openshift-monitoring

metrics-server-6fdfc4cfb9-d2n6q

Scheduled

Successfully assigned openshift-monitoring/metrics-server-6fdfc4cfb9-d2n6q to master-0

openshift-monitoring

monitoring-plugin-6bc88968b6-frbh2

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-6bc88968b6-frbh2 to master-0

openshift-monitoring

node-exporter-c8pdj

Scheduled

Successfully assigned openshift-monitoring/node-exporter-c8pdj to master-0

openshift-monitoring

openshift-state-metrics-74cc79fd76-84z7r

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-74cc79fd76-84z7r to master-0

openshift-nmstate

nmstate-metrics-69594cc75-26sjk

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-69594cc75-26sjk to master-0

openshift-nmstate

nmstate-operator-75c5dccd6c-548z6

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-75c5dccd6c-548z6 to master-0

openshift-console

console-6f9c4688bb-5k492

Scheduled

Successfully assigned openshift-console/console-6f9c4688bb-5k492 to master-0

openshift-nmstate

nmstate-webhook-786f45cff4-lsgfs

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-786f45cff4-lsgfs to master-0

openshift-operators

obo-prometheus-operator-68bc856cb9-4flmz

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-4flmz to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq to master-0

openstack

neutron-f49f69884-v8xz2

Scheduled

Successfully assigned openstack/neutron-f49f69884-v8xz2 to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9 to master-0

openshift-operators

observability-operator-59bdc8b94-sn8gn

Scheduled

Successfully assigned openshift-operators/observability-operator-59bdc8b94-sn8gn to master-0

openshift-operators

perses-operator-5bf474d74f-rcw9j

Scheduled

Successfully assigned openshift-operators/perses-operator-5bf474d74f-rcw9j to master-0

openshift-storage

lvms-operator-cc6c44d98-tvcmb

Scheduled

Successfully assigned openshift-storage/lvms-operator-cc6c44d98-tvcmb to master-0

openshift-storage

vg-manager-9nzbx

Scheduled

Successfully assigned openshift-storage/vg-manager-9nzbx to master-0

openstack

cinder-66da-account-create-update-cczxb

Scheduled

Successfully assigned openstack/cinder-66da-account-create-update-cczxb to master-0

openstack-operators

openstack-operator-index-zt56c

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-zt56c to master-0

openstack

cinder-86971-api-0

Scheduled

Successfully assigned openstack/cinder-86971-api-0 to master-0

openstack-operators

openstack-operator-index-klqvq

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-klqvq to master-0

openshift-cloud-credential-operator

cloud-credential-operator-55d85b7b47-7tb74

Scheduled

Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-7tb74 to master-0

openshift-network-console

networking-console-plugin-5cbd49d755-2lmd2

Scheduled

Successfully assigned openshift-network-console/networking-console-plugin-5cbd49d755-2lmd2 to master-0

openshift-network-diagnostics

network-check-source-7c67b67d47-88mpr

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-diagnostics

network-check-source-7c67b67d47-88mpr

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-source-7c67b67d47-88mpr to master-0

openstack-operators

keystone-operator-controller-manager-7c789f89c6-zq79c

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-7c789f89c6-zq79c to master-0

openstack-operators

openstack-operator-index-zt56c

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-zt56c to master-0

openstack-operators

ovn-operator-controller-manager-75684d597f-ccbn4

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-75684d597f-ccbn4 to master-0

openstack-operators

placement-operator-controller-manager-648564c9fc-l7256

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-648564c9fc-l7256 to master-0

openstack-operators

placement-operator-controller-manager-648564c9fc-l7256

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-648564c9fc-l7256 to master-0

openshift-multus

cni-sysctl-allowlist-ds-rhtr2

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-rhtr2 to master-0

openshift-multus

cni-sysctl-allowlist-ds-2hhhs

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-2hhhs to master-0

openshift-monitoring

thanos-querier-9995cd46f-q546g

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-9995cd46f-q546g to master-0

openstack-operators

ironic-operator-controller-manager-545456dc4-xth7w

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-545456dc4-xth7w to master-0

openstack-operators

infra-operator-controller-manager-65b58d74b-rrd9h

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-65b58d74b-rrd9h to master-0

openstack-operators

watcher-operator-controller-manager-bccc79885-k5rcm

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-k5rcm to master-0

openstack-operators

horizon-operator-controller-manager-78bc7f9bd9-rcxp2

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-rcxp2 to master-0

openstack-operators

heat-operator-controller-manager-cf99c678f-qmcr7

Scheduled

Successfully assigned openstack-operators/heat-operator-controller-manager-cf99c678f-qmcr7 to master-0

openstack-operators

glance-operator-controller-manager-64db6967f8-mq69x

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-64db6967f8-mq69x to master-0

openstack-operators

designate-operator-controller-manager-5d87c9d997-jzt22

Scheduled

Successfully assigned openstack-operators/designate-operator-controller-manager-5d87c9d997-jzt22 to master-0

sushy-emulator

nova-console-poller-849dd7bd7c-wlzjd

Scheduled

Successfully assigned sushy-emulator/nova-console-poller-849dd7bd7c-wlzjd to master-0

openstack-operators

cinder-operator-controller-manager-55d77d7b5c-hjt7h

Scheduled

Successfully assigned openstack-operators/cinder-operator-controller-manager-55d77d7b5c-hjt7h to master-0

cert-manager

cert-manager-545d4d4674-fz858

Scheduled

Successfully assigned cert-manager/cert-manager-545d4d4674-fz858 to master-0

openshift-nmstate

nmstate-console-plugin-5dcbbd79cf-cbbp5

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-cbbp5 to master-0

openstack-operators

barbican-operator-controller-manager-6db6876945-nlssq

Scheduled

Successfully assigned openstack-operators/barbican-operator-controller-manager-6db6876945-nlssq to master-0

sushy-emulator

nova-console-recorder-6bd67877d9-cd76q

Scheduled

Successfully assigned sushy-emulator/nova-console-recorder-6bd67877d9-cd76q to master-0

openstack-operators

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Scheduled

Successfully assigned openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5 to master-0

openshift-storage

vg-manager-9nzbx

Scheduled

Successfully assigned openshift-storage/vg-manager-9nzbx to master-0

openshift-storage

lvms-operator-cc6c44d98-tvcmb

Scheduled

Successfully assigned openshift-storage/lvms-operator-cc6c44d98-tvcmb to master-0

openshift-cluster-machine-approver

machine-approver-754bdc9f9d-bbz7l

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-754bdc9f9d-bbz7l to master-0

openshift-monitoring

telemeter-client-69ccf66766-q79sx

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-69ccf66766-q79sx to master-0

openshift-nmstate

nmstate-handler-9lvhn

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-9lvhn to master-0

openshift-monitoring

prometheus-operator-admission-webhook-8464df8497-lxzml

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-lxzml to master-0

openshift-nmstate

nmstate-metrics-69594cc75-26sjk

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-69594cc75-26sjk to master-0

openshift-monitoring

prometheus-operator-admission-webhook-8464df8497-lxzml

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-5ff8674d55-nvm8t

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-5ff8674d55-nvm8t to master-0

openstack-operators

ovn-operator-controller-manager-75684d597f-ccbn4

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-75684d597f-ccbn4 to master-0

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-r8xj9

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8xj9 to master-0

openstack-operators

swift-operator-controller-manager-9b9ff9f4d-s8tqw

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-9b9ff9f4d-s8tqw to master-0

openstack-operators

telemetry-operator-controller-manager-5fdb694969-bbqxt

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-5fdb694969-bbqxt to master-0

openstack-operators

test-operator-controller-manager-55b5ff4dbb-9cpc2

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-55b5ff4dbb-9cpc2 to master-0

openshift-nmstate

nmstate-operator-75c5dccd6c-548z6

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-75c5dccd6c-548z6 to master-0

openstack-operators

watcher-operator-controller-manager-bccc79885-k5rcm

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-bccc79885-k5rcm to master-0

openshift-nmstate

nmstate-webhook-786f45cff4-lsgfs

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-786f45cff4-lsgfs to master-0

cert-manager

cert-manager-545d4d4674-fz858

Scheduled

Successfully assigned cert-manager/cert-manager-545d4d4674-fz858 to master-0

cert-manager

cert-manager-cainjector-5545bd876-p74j2

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-p74j2 to master-0

cert-manager

cert-manager-webhook-6888856db4-vmqtf

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-6888856db4-vmqtf to master-0

metallb-system

controller-86ddb6bd46-nx428

Scheduled

Successfully assigned metallb-system/controller-86ddb6bd46-nx428 to master-0

metallb-system

frr-k8s-9cvbt

Scheduled

Successfully assigned metallb-system/frr-k8s-9cvbt to master-0

metallb-system

frr-k8s-webhook-server-7f989f654f-vnw67

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-7f989f654f-vnw67 to master-0

metallb-system

metallb-operator-controller-manager-547df9ff8b-bpxrb

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-547df9ff8b-bpxrb to master-0

openshift-monitoring

telemeter-client-69ccf66766-q79sx

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-69ccf66766-q79sx to master-0

metallb-system

metallb-operator-webhook-server-57d6f574cc-8zmmh

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-57d6f574cc-8zmmh to master-0

openshift-monitoring

prometheus-operator-5ff8674d55-nvm8t

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-5ff8674d55-nvm8t to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

metallb-system

speaker-lnt6b

Scheduled

Successfully assigned metallb-system/speaker-lnt6b to master-0

openshift-monitoring

thanos-querier-9995cd46f-q546g

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-9995cd46f-q546g to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-console

console-6594fcb745-7lf8n

Scheduled

Successfully assigned openshift-console/console-6594fcb745-7lf8n to master-0

openshift-monitoring

openshift-state-metrics-74cc79fd76-84z7r

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-74cc79fd76-84z7r to master-0

openshift-monitoring

node-exporter-c8pdj

Scheduled

Successfully assigned openshift-monitoring/node-exporter-c8pdj to master-0

openshift-monitoring

monitoring-plugin-6bc88968b6-frbh2

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-6bc88968b6-frbh2 to master-0

openshift-monitoring

metrics-server-6fdfc4cfb9-d2n6q

Scheduled

Successfully assigned openshift-monitoring/metrics-server-6fdfc4cfb9-d2n6q to master-0

openshift-monitoring

kube-state-metrics-68b88f8cb5-5cj66

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-68b88f8cb5-5cj66 to master-0

openshift-insights

insights-operator-8f89dfddd-rlx9x

Scheduled

Successfully assigned openshift-insights/insights-operator-8f89dfddd-rlx9x to master-0

openshift-console

console-64d844fb5f-9b28j

Scheduled

Successfully assigned openshift-console/console-64d844fb5f-9b28j to master-0

openshift-multus

multus-admission-controller-cb4c85d9-8ltxz

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-cb4c85d9-8ltxz to master-0

openstack-operators

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-grrjr to master-0

openstack

cinder-86971-api-0

Scheduled

Successfully assigned openstack/cinder-86971-api-0 to master-0

openstack-operators

openstack-operator-controller-init-6f44f7b99f-fplrp

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-6f44f7b99f-fplrp to master-0

openshift-machine-api

cluster-autoscaler-operator-69576476f7-dqvvb

Scheduled

Successfully assigned openshift-machine-api/cluster-autoscaler-operator-69576476f7-dqvvb to master-0

openshift-console

console-5c96487ddf-5r2nd

Scheduled

Successfully assigned openshift-console/console-5c96487ddf-5r2nd to master-0

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

openshift-marketplace

redhat-operators-fdltd

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-fdltd to master-0

openshift-cluster-samples-operator

cluster-samples-operator-664cb58b85-fmzk7

Scheduled

Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fmzk7 to master-0

openshift-marketplace

redhat-marketplace-z2cc9

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-z2cc9 to master-0

openstack

cinder-86971-backup-0

Scheduled

Successfully assigned openstack/cinder-86971-backup-0 to master-0

openstack

cinder-86971-backup-0

Scheduled

Successfully assigned openstack/cinder-86971-backup-0 to master-0

openstack

glance-3631-account-create-update-8m8jf

Scheduled

Successfully assigned openstack/glance-3631-account-create-update-8m8jf to master-0

openstack

neutron-fd8d8c7c7-w5vwh

Scheduled

Successfully assigned openstack/neutron-fd8d8c7c7-w5vwh to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openshift-marketplace

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Scheduled

Successfully assigned openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb to master-0

openshift-authentication

oauth-openshift-7ff74686db-b9jm5

FailedScheduling

skip schedule deleting pod: openshift-authentication/oauth-openshift-7ff74686db-b9jm5

openshift-authentication

oauth-openshift-7ff74686db-b9jm5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-7ff74686db-b9jm5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-cluster-storage-operator

cluster-storage-operator-6fbfc8dc8f-v48jn

Scheduled

Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-v48jn to master-0

openshift-marketplace

community-operators-rw59s

Scheduled

Successfully assigned openshift-marketplace/community-operators-rw59s to master-0

openshift-marketplace

certified-operators-vxpb5

Scheduled

Successfully assigned openshift-marketplace/certified-operators-vxpb5 to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

glance-213eb-default-external-api-0

Scheduled

Successfully assigned openstack/glance-213eb-default-external-api-0 to master-0

openshift-authentication

oauth-openshift-6c8ccbd44d-m8w7j

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-6c8ccbd44d-m8w7j to master-0

openstack

cinder-86971-db-sync-m7xht

Scheduled

Successfully assigned openstack/cinder-86971-db-sync-m7xht to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5 to master-0

metallb-system

controller-86ddb6bd46-nx428

Scheduled

Successfully assigned metallb-system/controller-86ddb6bd46-nx428 to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openshift-marketplace

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Scheduled

Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w to master-0

openstack

nova-api-db-create-94ssk

Scheduled

Successfully assigned openstack/nova-api-db-create-94ssk to master-0

openstack

nova-cell0-0300-account-create-update-b66m5

Scheduled

Successfully assigned openstack/nova-cell0-0300-account-create-update-b66m5 to master-0

openstack

nova-cell0-cell-mapping-c5sqt

Scheduled

Successfully assigned openstack/nova-cell0-cell-mapping-c5sqt to master-0

openstack

nova-cell0-conductor-0

Scheduled

Successfully assigned openstack/nova-cell0-conductor-0 to master-0

openstack

nova-cell0-conductor-db-sync-9xm4p

Scheduled

Successfully assigned openstack/nova-cell0-conductor-db-sync-9xm4p to master-0

openstack

nova-cell0-db-create-64285

Scheduled

Successfully assigned openstack/nova-cell0-db-create-64285 to master-0

openstack

nova-cell1-2b75-account-create-update-gqckp

Scheduled

Successfully assigned openstack/nova-cell1-2b75-account-create-update-gqckp to master-0

openstack

nova-cell1-cell-mapping-8cwkr

Scheduled

Successfully assigned openstack/nova-cell1-cell-mapping-8cwkr to master-0

metallb-system

frr-k8s-9cvbt

Scheduled

Successfully assigned metallb-system/frr-k8s-9cvbt to master-0

openstack

openstack | glance-213eb-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-213eb-default-external-api-0 to master-0
openstack | glance-213eb-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-213eb-default-external-api-0 to master-0
openstack | nova-cell1-compute-ironic-compute-0 | Scheduled | Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0
openstack | nova-cell1-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-0 to master-0
openstack | nova-cell1-conductor-db-sync-2rz24 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-db-sync-2rz24 to master-0
openstack | nova-cell1-db-create-26xt5 | Scheduled | Successfully assigned openstack/nova-cell1-db-create-26xt5 to master-0
openstack | nova-cell1-host-discover-8g65x | Scheduled | Successfully assigned openstack/nova-cell1-host-discover-8g65x to master-0
openstack | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0
openstack | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | openstack-cell1-galera-0 | Scheduled | Successfully assigned openstack/openstack-cell1-galera-0 to master-0
openstack | openstack-galera-0 | Scheduled | Successfully assigned openstack/openstack-galera-0 to master-0
openstack | openstackclient | Scheduled | Successfully assigned openstack/openstackclient to master-0
openstack | ovn-controller-metrics-h69l5 | Scheduled | Successfully assigned openstack/ovn-controller-metrics-h69l5 to master-0
openstack | ovn-controller-ovs-csxfx | Scheduled | Successfully assigned openstack/ovn-controller-ovs-csxfx to master-0
openstack | ovn-controller-wptpb | Scheduled | Successfully assigned openstack/ovn-controller-wptpb to master-0
openstack | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0
openstack | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0
openstack | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0

metallb-system | frr-k8s-webhook-server-7f989f654f-vnw67 | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-7f989f654f-vnw67 to master-0
openstack | placement-326e-account-create-update-g8dbq | Scheduled | Successfully assigned openstack/placement-326e-account-create-update-g8dbq to master-0
openstack | placement-5dbd89f674-7gtrq | Scheduled | Successfully assigned openstack/placement-5dbd89f674-7gtrq to master-0
openstack | placement-6cc7544794-vmcq4 | Scheduled | Successfully assigned openstack/placement-6cc7544794-vmcq4 to master-0
openstack | placement-db-create-shqgh | Scheduled | Successfully assigned openstack/placement-db-create-shqgh to master-0
openstack | placement-db-sync-4wkkv | Scheduled | Successfully assigned openstack/placement-db-sync-4wkkv to master-0
openstack | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0
openstack | rabbitmq-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-server-0 to master-0
openstack | root-account-create-update-gshbr | Scheduled | Successfully assigned openstack/root-account-create-update-gshbr to master-0
openstack | root-account-create-update-m995r | Scheduled | Successfully assigned openstack/root-account-create-update-m995r to master-0
metallb-system | metallb-operator-controller-manager-547df9ff8b-bpxrb | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-547df9ff8b-bpxrb to master-0
openstack | swift-proxy-7b675b8b94-rfvgr | Scheduled | Successfully assigned openstack/swift-proxy-7b675b8b94-rfvgr to master-0
openstack | swift-ring-rebalance-796n4 | Scheduled | Successfully assigned openstack/swift-ring-rebalance-796n4 to master-0
openstack | swift-ring-rebalance-gr69d | Scheduled | Successfully assigned openstack/swift-ring-rebalance-gr69d to master-0
openstack | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0

openstack-operators | 0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5 | Scheduled | Successfully assigned openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5 to master-0
openstack-operators | barbican-operator-controller-manager-6db6876945-nlssq | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-6db6876945-nlssq to master-0
openstack | dnsmasq-dns-d6c6c44c5-7fbfp | Scheduled | Successfully assigned openstack/dnsmasq-dns-d6c6c44c5-7fbfp to master-0
metallb-system | metallb-operator-webhook-server-57d6f574cc-8zmmh | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-57d6f574cc-8zmmh to master-0
openstack | dnsmasq-dns-8459745b77-pkh7k | Scheduled | Successfully assigned openstack/dnsmasq-dns-8459745b77-pkh7k to master-0
openstack-operators | cinder-operator-controller-manager-55d77d7b5c-hjt7h | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-55d77d7b5c-hjt7h to master-0
openstack | dnsmasq-dns-7fb78888f7-pwtc8 | Scheduled | Successfully assigned openstack/dnsmasq-dns-7fb78888f7-pwtc8 to master-0
openstack | dnsmasq-dns-7f654db4c5-5b5lg | Scheduled | Successfully assigned openstack/dnsmasq-dns-7f654db4c5-5b5lg to master-0
openstack-operators | designate-operator-controller-manager-5d87c9d997-jzt22 | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-5d87c9d997-jzt22 to master-0
openstack | dnsmasq-dns-7bbc6577f5-mldsh | Scheduled | Successfully assigned openstack/dnsmasq-dns-7bbc6577f5-mldsh to master-0
openstack-operators | glance-operator-controller-manager-64db6967f8-mq69x | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-64db6967f8-mq69x to master-0
openstack | dnsmasq-dns-7754f44b87-jrdnd | Scheduled | Successfully assigned openstack/dnsmasq-dns-7754f44b87-jrdnd to master-0
openstack | dnsmasq-dns-76ff7d945-qtbgb | Scheduled | Successfully assigned openstack/dnsmasq-dns-76ff7d945-qtbgb to master-0
openstack-operators | heat-operator-controller-manager-cf99c678f-qmcr7 | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-cf99c678f-qmcr7 to master-0
openstack | dnsmasq-dns-7466868675-m4658 | Scheduled | Successfully assigned openstack/dnsmasq-dns-7466868675-m4658 to master-0
openstack | dnsmasq-dns-69fd45f56f-msd9g | Scheduled | Successfully assigned openstack/dnsmasq-dns-69fd45f56f-msd9g to master-0
openstack-operators | horizon-operator-controller-manager-78bc7f9bd9-rcxp2 | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-rcxp2 to master-0
openstack | dnsmasq-dns-699fc4cfdf-cmxnl | Scheduled | Successfully assigned openstack/dnsmasq-dns-699fc4cfdf-cmxnl to master-0
openstack | dnsmasq-dns-667b9d65dc-vfb6d | Scheduled | Successfully assigned openstack/dnsmasq-dns-667b9d65dc-vfb6d to master-0
metallb-system | speaker-lnt6b | Scheduled | Successfully assigned metallb-system/speaker-lnt6b to master-0
openstack-operators | infra-operator-controller-manager-65b58d74b-rrd9h | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-65b58d74b-rrd9h to master-0
openstack | dnsmasq-dns-6465c5fc85-2kk4v | Scheduled | Successfully assigned openstack/dnsmasq-dns-6465c5fc85-2kk4v to master-0
openstack | dnsmasq-dns-5f5db5bd5-2tvbr | Scheduled | Successfully assigned openstack/dnsmasq-dns-5f5db5bd5-2tvbr to master-0
openstack-operators | ironic-operator-controller-manager-545456dc4-xth7w | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-545456dc4-xth7w to master-0
openstack | dnsmasq-dns-5cc8bb4897-sws9x | Scheduled | Successfully assigned openstack/dnsmasq-dns-5cc8bb4897-sws9x to master-0
openstack | dnsmasq-dns-58dc6c9559-pt84w | Scheduled | Successfully assigned openstack/dnsmasq-dns-58dc6c9559-pt84w to master-0
openstack-operators | keystone-operator-controller-manager-7c789f89c6-zq79c | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-7c789f89c6-zq79c to master-0
openstack | dnsmasq-dns-589dd8c5c-bm6b7 | Scheduled | Successfully assigned openstack/dnsmasq-dns-589dd8c5c-bm6b7 to master-0
openstack | dnsmasq-dns-5787b6ddf7-gjnck | Scheduled | Successfully assigned openstack/dnsmasq-dns-5787b6ddf7-gjnck to master-0
openstack-operators | manila-operator-controller-manager-67d996989d-7ksrz | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-67d996989d-7ksrz to master-0

openshift-authentication | oauth-openshift-698d9d45c9-5wh7z | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-698d9d45c9-5wh7z to master-0
openshift-authentication | oauth-openshift-698d9d45c9-5wh7z | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-marketplace | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg | Scheduled | Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg to master-0
openshift-authentication | oauth-openshift-67c6dd6955-hbksv | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-67c6dd6955-hbksv to master-0
openshift-authentication | oauth-openshift-67c6dd6955-hbksv | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-machine-api | cluster-autoscaler-operator-69576476f7-dqvvb | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-69576476f7-dqvvb to master-0
openstack | cinder-86971-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-86971-scheduler-0 to master-0
openstack | cinder-86971-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-86971-scheduler-0 to master-0
openstack-operators | octavia-operator-controller-manager-5d86c7ddb7-2plwq | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-2plwq to master-0
openstack | cinder-86971-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-86971-volume-lvm-iscsi-0 to master-0
openstack | cinder-86971-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-86971-volume-lvm-iscsi-0 to master-0
openshift-authentication | oauth-openshift-578bc8c86c-mczhd | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-578bc8c86c-mczhd to master-0
openstack-operators | nova-operator-controller-manager-74b6b5dc96-ndppt | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-74b6b5dc96-ndppt to master-0
openstack | cinder-db-create-5hn4x | Scheduled | Successfully assigned openstack/cinder-db-create-5hn4x to master-0
openstack-operators | neutron-operator-controller-manager-54688575f-vj8dt | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-54688575f-vj8dt to master-0
openstack-operators | mariadb-operator-controller-manager-7b6bfb6475-j288g | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-j288g to master-0
openshift-machine-api | machine-api-operator-84bf6db4f9-t8jw4 | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-84bf6db4f9-t8jw4 to master-0

openshift-marketplace | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk to master-0
openshift-monitoring | prometheus-operator-admission-webhook-8464df8497-lxzml | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-lxzml to master-0
openshift-monitoring | prometheus-operator-admission-webhook-8464df8497-lxzml | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-operator-lifecycle-manager | packageserver-f5bf97fcc-w82vx | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-f5bf97fcc-w82vx to master-0
openshift-operators | obo-prometheus-operator-68bc856cb9-4flmz | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-4flmz to master-0
openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-marketplace | 0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns | Scheduled | Successfully assigned openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns to master-0
openshift-machine-config-operator | machine-config-server-xskwx | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-xskwx to master-0
sushy-emulator | sushy-emulator-78f6d7d749-xgc79 | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-78f6d7d749-xgc79 to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq to master-0
openshift-image-registry | node-ca-tztzb | Scheduled | Successfully assigned openshift-image-registry/node-ca-tztzb to master-0
openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9 to master-0
openshift-operators | observability-operator-59bdc8b94-sn8gn | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-sn8gn to master-0
openshift-operators | perses-operator-5bf474d74f-rcw9j | Scheduled | Successfully assigned openshift-operators/perses-operator-5bf474d74f-rcw9j to master-0
openshift-machine-config-operator | machine-config-operator-fdb5c78b5-rk7q8 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-fdb5c78b5-rk7q8 to master-0
openshift-machine-config-operator | machine-config-daemon-kp74q | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-kp74q to master-0
openshift-ingress | router-default-79f8cd6fdd-858hg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openstack | glance-213eb-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-213eb-default-internal-api-0 to master-0
openstack | ironic-inspector-db-create-sdzv8 | Scheduled | Successfully assigned openstack/ironic-inspector-db-create-sdzv8 to master-0
openshift-multus | multus-admission-controller-56bbfd46b8-6qcf8 | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-56bbfd46b8-6qcf8 to master-0
openshift-machine-config-operator | machine-config-controller-ff46b7bdf-55p6v | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-controller-ff46b7bdf-55p6v to master-0
openshift-ingress-canary | ingress-canary-7hzkm | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-7hzkm to master-0
openshift-machine-api | machine-api-operator-84bf6db4f9-t8jw4 | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-84bf6db4f9-t8jw4 to master-0

kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_a9ed9dde-a6da-43b0-b4d0-1caab857b6e7 became leader
kube-system | Required control plane pods have been created
kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_ad5a6edc-dbf8-4986-a36b-135a56b03ce3 became leader
kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_3c4e5aac-2942-412b-bfa6-3a9959e3b71e became leader
default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_4f3b850c-1f0d-4780-b6fe-b89c1c3168fe became leader

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace
assisted-installer | job-controller | assisted-installer-controller | FailedCreate | Error creating: pods "assisted-installer-controller-" is forbidden: error looking up service account assisted-installer/assisted-installer-controller: serviceaccount "assisted-installer-controller" not found (x2)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace
assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-mqwls
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_1621e4fc-8334-47e4-a2df-b4f7464e3390 became leader
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_1571aba7-cdc1-4c34-8b95-4c3abeb15487 became leader
openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-745944c6b7 to 1
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_a1725727-202b-47ef-bc20-907dbfd3e08f became leader
openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"
openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace
openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace

openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-77899cf6d to 1
openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-7c649bf6d4 to 1
openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-86d7cdfdfb to 1
openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-5c74bfc494 to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace
openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-69b6fc6b88 to 1
openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-7f65c457f5 to 1
openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-589895fbb7 to 1
openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-64bf9778cb to 1
openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-7c6989d6c4 to 1
openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-5884b9cd56 to 1
openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-799b6db4d7 to 1
openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-8565d84698 to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found (x2)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-86d7cdfdfb | FailedCreate | Error creating: pods "kube-controller-manager-operator-86d7cdfdfb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-network-operator | replicaset-controller | network-operator-7c649bf6d4 | FailedCreate | Error creating: pods "network-operator-7c649bf6d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-version | replicaset-controller | cluster-version-operator-745944c6b7 | FailedCreate | Error creating: pods "cluster-version-operator-745944c6b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-5c74bfc494 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-5c74bfc494-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-77899cf6d | FailedCreate | Error creating: pods "cluster-olm-operator-77899cf6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-7f65c457f5 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-7f65c457f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-69b6fc6b88 | FailedCreate | Error creating: pods "service-ca-operator-69b6fc6b88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-dns-operator | replicaset-controller | dns-operator-589895fbb7 | FailedCreate | Error creating: pods "dns-operator-589895fbb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-etcd-operator | replicaset-controller | etcd-operator-5884b9cd56 | FailedCreate | Error creating: pods "etcd-operator-5884b9cd56-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-799b6db4d7 | FailedCreate | Error creating: pods "openshift-apiserver-operator-799b6db4d7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-8565d84698 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-8565d84698-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-66c7586884 to 1
openshift-authentication-operator | replicaset-controller | authentication-operator-7c6989d6c4 | FailedCreate | Error creating: pods "authentication-operator-7c6989d6c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-854648ff6d to 1
openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-5685fbc7d to 1
openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-66c7586884 to 1
openshift-marketplace | replicaset-controller | marketplace-operator-64bf9778cb | FailedCreate | Error creating: pods "marketplace-operator-64bf9778cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-674cbfbd9d to 1

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-674cbfbd9d to 1
(x10)

assisted-installer

default-scheduler

assisted-installer-controller-mqwls

FailedScheduling

no nodes available to schedule pods

openshift-ingress-operator

deployment-controller

ingress-operator

ScalingReplicaSet

Scaled up replica set ingress-operator-677db989d6 to 1

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-d64cfc9db to 1
(x10)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-854648ff6d

FailedCreate

Error creating: pods "package-server-manager-854648ff6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

FailedCreate

Error creating: pods "cluster-node-tuning-operator-66c7586884-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

FailedCreate

Error creating: pods "cluster-node-tuning-operator-66c7586884-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-image-registry

deployment-controller

cluster-image-registry-operator

ScalingReplicaSet

Scaled up replica set cluster-image-registry-operator-86d6d77c7c to 1
(x10)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

FailedCreate

Error creating: pods "cluster-monitoring-operator-674cbfbd9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-kube-apiserver-operator

deployment-controller

kube-apiserver-operator

ScalingReplicaSet

Scaled up replica set kube-apiserver-operator-68bd585b to 1

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-7d9c49f57b to 1
(x10)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

FailedCreate

Error creating: pods "cluster-monitoring-operator-674cbfbd9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-86d6d77c7c

FailedCreate

Error creating: pods "cluster-image-registry-operator-86d6d77c7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-64488f9d78 to 1
(x9)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68bd585b

FailedCreate

Error creating: pods "kube-apiserver-operator-68bd585b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7d9c49f57b

FailedCreate

Error creating: pods "catalog-operator-7d9c49f57b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-d64cfc9db

FailedCreate

Error creating: pods "olm-operator-d64cfc9db-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x5)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-5cdb4c5598

FailedCreate

Error creating: pods "cluster-baremetal-operator-5cdb4c5598-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished
(x10)

openshift-ingress-operator

replicaset-controller

ingress-operator-677db989d6

FailedCreate

Error creating: pods "ingress-operator-677db989d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-config-operator

replicaset-controller

openshift-config-operator-64488f9d78

FailedCreate

Error creating: pods "openshift-config-operator-64488f9d78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

kube-system

Required control plane pods have been created

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-5cdb4c5598 to 1
(x11)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-5685fbc7d

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-5685fbc7d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x5)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-5cdb4c5598

FailedCreate

Error creating: pods "cluster-baremetal-operator-5cdb4c5598-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-5cdb4c5598 to 1

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_42cb5421-39bf-4001-aeba-33b0e2aada6f became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_f80f4ddb-1f15-4881-9e8a-8acf02665f82 became leader
(x5)

assisted-installer

default-scheduler

assisted-installer-controller-mqwls

FailedScheduling

no nodes available to schedule pods

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_ae077fb6-5758-4466-8b0d-a50baac69ae4 became leader
(x6)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5c74bfc494

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-5c74bfc494-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-86d6d77c7c

FailedCreate

Error creating: pods "cluster-image-registry-operator-86d6d77c7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-ingress-operator

replicaset-controller

ingress-operator-677db989d6

FailedCreate

Error creating: pods "ingress-operator-677db989d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-7f65c457f5

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-7f65c457f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-69b6fc6b88

FailedCreate

Error creating: pods "service-ca-operator-69b6fc6b88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-5cdb4c5598

FailedCreate

Error creating: pods "cluster-baremetal-operator-5cdb4c5598-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-5cdb4c5598

FailedCreate

Error creating: pods "cluster-baremetal-operator-5cdb4c5598-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-d64cfc9db

FailedCreate

Error creating: pods "olm-operator-d64cfc9db-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-854648ff6d

FailedCreate

Error creating: pods "package-server-manager-854648ff6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-799b6db4d7

FailedCreate

Error creating: pods "openshift-apiserver-operator-799b6db4d7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7d9c49f57b

FailedCreate

Error creating: pods "catalog-operator-7d9c49f57b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

FailedCreate

Error creating: pods "cluster-monitoring-operator-674cbfbd9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

FailedCreate

Error creating: pods "cluster-monitoring-operator-674cbfbd9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68bd585b

FailedCreate

Error creating: pods "kube-apiserver-operator-68bd585b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-network-operator

replicaset-controller

network-operator-7c649bf6d4

FailedCreate

Error creating: pods "network-operator-7c649bf6d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x6)

openshift-marketplace

replicaset-controller

marketplace-operator-64bf9778cb

FailedCreate

Error creating: pods "marketplace-operator-64bf9778cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-dns-operator

replicaset-controller

dns-operator-589895fbb7

FailedCreate

Error creating: pods "dns-operator-589895fbb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-config-operator

replicaset-controller

openshift-config-operator-64488f9d78

FailedCreate

Error creating: pods "openshift-config-operator-64488f9d78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

FailedCreate

Error creating: pods "cluster-node-tuning-operator-66c7586884-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-8565d84698

FailedCreate

Error creating: pods "openshift-controller-manager-operator-8565d84698-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-version

replicaset-controller

cluster-version-operator-745944c6b7

FailedCreate

Error creating: pods "cluster-version-operator-745944c6b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-etcd-operator

replicaset-controller

etcd-operator-5884b9cd56

FailedCreate

Error creating: pods "etcd-operator-5884b9cd56-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-authentication-operator

replicaset-controller

authentication-operator-7c6989d6c4

FailedCreate

Error creating: pods "authentication-operator-7c6989d6c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

FailedCreate

Error creating: pods "cluster-node-tuning-operator-66c7586884-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-5685fbc7d

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-5685fbc7d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-77899cf6d

FailedCreate

Error creating: pods "cluster-olm-operator-77899cf6d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-ingress-operator

default-scheduler

ingress-operator-677db989d6-tklw9

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-5cdb4c5598

SuccessfulCreate

Created pod: cluster-baremetal-operator-5cdb4c5598-nmwjr

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68bd585b

SuccessfulCreate

Created pod: kube-apiserver-operator-68bd585b-qnhrz

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-86d6d77c7c

SuccessfulCreate

Created pod: cluster-image-registry-operator-86d6d77c7c-kg26q

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-68bd585b-qnhrz

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

replicaset-controller

network-operator-7c649bf6d4

SuccessfulCreate

Created pod: network-operator-7c649bf6d4-v4xm9

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-7f65c457f5-bczvd

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-machine-api

default-scheduler

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-5c74bfc494-85z7m

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-marketplace

replicaset-controller

marketplace-operator-64bf9778cb

SuccessfulCreate

Created pod: marketplace-operator-64bf9778cb-q7hrg

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

SuccessfulCreate

Created pod: cluster-monitoring-operator-674cbfbd9d-czm5f
(x7)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-86d7cdfdfb

FailedCreate

Error creating: pods "kube-controller-manager-operator-86d7cdfdfb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-7f65c457f5

SuccessfulCreate

Created pod: kube-storage-version-migrator-operator-7f65c457f5-bczvd

openshift-marketplace

default-scheduler

marketplace-operator-64bf9778cb-q7hrg

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5c74bfc494

SuccessfulCreate

Created pod: openshift-kube-scheduler-operator-5c74bfc494-85z7m

openshift-image-registry

default-scheduler

cluster-image-registry-operator-86d6d77c7c-kg26q

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-674cbfbd9d

SuccessfulCreate

Created pod: cluster-monitoring-operator-674cbfbd9d-czm5f

openshift-monitoring

default-scheduler

cluster-monitoring-operator-674cbfbd9d-czm5f

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

default-scheduler

network-operator-7c649bf6d4-v4xm9

Scheduled

Successfully assigned openshift-network-operator/network-operator-7c649bf6d4-v4xm9 to master-0

openshift-ingress-operator

replicaset-controller

ingress-operator-677db989d6

SuccessfulCreate

Created pod: ingress-operator-677db989d6-tklw9

openshift-machine-api

default-scheduler

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

default-scheduler

cluster-monitoring-operator-674cbfbd9d-czm5f

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-854648ff6d

SuccessfulCreate

Created pod: package-server-manager-854648ff6d-kr9ft

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-77899cf6d

SuccessfulCreate

Created pod: cluster-olm-operator-77899cf6d-cgdkk

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-5685fbc7d-txnh5

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-service-ca-operator

default-scheduler

service-ca-operator-69b6fc6b88-cg9rz

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-d64cfc9db-qd6xh

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-799b6db4d7

SuccessfulCreate

Created pod: openshift-apiserver-operator-799b6db4d7-jtbd6

openshift-service-ca-operator

replicaset-controller

service-ca-operator-69b6fc6b88

SuccessfulCreate

Created pod: service-ca-operator-69b6fc6b88-cg9rz

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-d64cfc9db

SuccessfulCreate

Created pod: olm-operator-d64cfc9db-qd6xh

openshift-authentication-operator

default-scheduler

authentication-operator-7c6989d6c4-7w8wf

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7d9c49f57b

SuccessfulCreate

Created pod: catalog-operator-7d9c49f57b-j454x

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-66c7586884-sxqnh

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-799b6db4d7-jtbd6

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-66c7586884-sxqnh

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-66c7586884

SuccessfulCreate

Created pod: cluster-node-tuning-operator-66c7586884-sxqnh

openshift-config-operator

replicaset-controller

openshift-config-operator-64488f9d78

SuccessfulCreate

Created pod: openshift-config-operator-64488f9d78-cb227

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-7d9c49f57b-j454x

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-config-operator

default-scheduler

openshift-config-operator-64488f9d78-cb227

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-5685fbc7d

SuccessfulCreate

Created pod: csi-snapshot-controller-operator-5685fbc7d-txnh5

openshift-authentication-operator

replicaset-controller

authentication-operator-7c6989d6c4

SuccessfulCreate

Created pod: authentication-operator-7c6989d6c4-7w8wf

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-77899cf6d-cgdkk

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-854648ff6d-kr9ft

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-dns-operator

default-scheduler

dns-operator-589895fbb7-wqqqr

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-etcd-operator

default-scheduler

etcd-operator-5884b9cd56-lc94h

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-etcd-operator

replicaset-controller

etcd-operator-5884b9cd56

SuccessfulCreate

Created pod: etcd-operator-5884b9cd56-lc94h

openshift-cluster-version

default-scheduler

cluster-version-operator-745944c6b7-fjbl4

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-745944c6b7-fjbl4 to master-0

openshift-controller-manager-operator

default-scheduler

openshift-controller-manager-operator-8565d84698-98wdp

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

BackOff

Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)

openshift-cluster-version

replicaset-controller

cluster-version-operator-745944c6b7

SuccessfulCreate

Created pod: cluster-version-operator-745944c6b7-fjbl4

openshift-dns-operator

replicaset-controller

dns-operator-589895fbb7

SuccessfulCreate

Created pod: dns-operator-589895fbb7-wqqqr

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-8565d84698

SuccessfulCreate

Created pod: openshift-controller-manager-operator-8565d84698-98wdp

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-86d7cdfdfb

SuccessfulCreate

Created pod: kube-controller-manager-operator-86d7cdfdfb-wb26b

assisted-installer

default-scheduler

assisted-installer-controller-mqwls

Scheduled

Successfully assigned assisted-installer/assisted-installer-controller-mqwls to master-0

openshift-kube-controller-manager-operator

default-scheduler

kube-controller-manager-operator-86d7cdfdfb-wb26b

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

assisted-installer

kubelet

assisted-installer-controller-mqwls

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef"

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3"

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Created

Created container: network-operator

assisted-installer

kubelet

assisted-installer-controller-mqwls

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef" in 5.55s (5.55s including waiting). Image size: 687947017 bytes.

assisted-installer

kubelet

assisted-installer-controller-mqwls

Created

Created container: assisted-installer-controller

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" in 5.56s (5.56s including waiting). Image size: 621647686 bytes.

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Started

Started container network-operator

assisted-installer

kubelet

assisted-installer-controller-mqwls

Started

Started container assisted-installer-controller

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_9b251850-05dc-43d5-ab28-2895dc4c657c became leader

assisted-installer

job-controller

assisted-installer-controller

Completed

Job completed

openshift-network-operator

job-controller

mtu-prober

SuccessfulCreate

Created pod: mtu-prober-82nln

openshift-network-operator

kubelet

mtu-prober-82nln

Started

Started container prober

openshift-network-operator

default-scheduler

mtu-prober-82nln

Scheduled

Successfully assigned openshift-network-operator/mtu-prober-82nln to master-0

openshift-network-operator

kubelet

mtu-prober-82nln

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine

openshift-network-operator

kubelet

mtu-prober-82nln

Created

Created container: prober
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Started

Started container kube-rbac-proxy-crio
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Created

Created container: kube-rbac-proxy-crio

openshift-network-operator

job-controller

mtu-prober

Completed

Job completed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-multus namespace

openshift-multus

default-scheduler

multus-g6nmq

Scheduled

Successfully assigned openshift-multus/multus-g6nmq to master-0

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-g6nmq

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-l2bdp

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-xf7kg

openshift-multus

default-scheduler

network-metrics-daemon-l2bdp

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-l2bdp to master-0

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916"

openshift-multus

default-scheduler

multus-additional-cni-plugins-xf7kg

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-xf7kg to master-0

openshift-multus

kubelet

multus-g6nmq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192"

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-8d675b596 to 1

openshift-multus

default-scheduler

multus-admission-controller-8d675b596-mmqbs

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

replicaset-controller

multus-admission-controller-8d675b596

SuccessfulCreate

Created pod: multus-admission-controller-8d675b596-mmqbs

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Started

Started container egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916" in 2.994s (2.994s including waiting). Image size: 528946249 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovn-kubernetes namespace

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Created

Created container: egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245"

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Started

Started container cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245" in 6.003s (6.003s including waiting). Image size: 683169303 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-host-network namespace

openshift-ovn-kubernetes

default-scheduler

ovnkube-control-plane-66b55d57d-mc46k

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-mc46k to master-0

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-rqhcv

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-rqhcv

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-rqhcv to master-0

openshift-ovn-kubernetes

deployment-controller

ovnkube-control-plane

ScalingReplicaSet

Scaled up replica set ovnkube-control-plane-66b55d57d to 1

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-66b55d57d

SuccessfulCreate

Created pod: ovnkube-control-plane-66b55d57d-mc46k

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-diagnostics namespace

openshift-network-diagnostics

default-scheduler

network-check-source-7c67b67d47-88mpr

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-diagnostics

replicaset-controller

network-check-source-7c67b67d47

SuccessfulCreate

Created pod: network-check-source-7c67b67d47-88mpr

openshift-network-diagnostics

deployment-controller

network-check-source

ScalingReplicaSet

Scaled up replica set network-check-source-7c67b67d47 to 1

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Created

Created container: kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-fr4qr

openshift-multus

kubelet

multus-g6nmq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" in 15.086s (15.086s including waiting). Image size: 1238047254 bytes.

openshift-multus

kubelet

multus-g6nmq

Created

Created container: kube-multus

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Started

Started container kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0"

openshift-multus

kubelet

multus-g6nmq

Started

Started container kube-multus

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0"

openshift-network-diagnostics

default-scheduler

network-check-target-fr4qr

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-fr4qr to master-0

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-node-identity namespace

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Created

Created container: bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7" in 4.432s (4.432s including waiting). Image size: 411585608 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7" in 1.288s (1.288s including waiting). Image size: 407347126 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a"

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0"

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Started

Started container routeoverride-cni

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-kpsm4

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Created

Created container: routeoverride-cni

openshift-network-node-identity

default-scheduler

network-node-identity-kpsm4

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-kpsm4 to master-0

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Started

Started container whereabouts-cni-bincopy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" in 15.641s (15.641s including waiting). Image size: 1637445817 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" in 15.904s (15.905s including waiting). Image size: 1637445817 bytes.

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Created

Created container: approver

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Started

Started container ovnkube-cluster-manager

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Started

Started container webhook

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Created

Created container: ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: ovn-controller

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-66b55d57d-mc46k became leader

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Created

Created container: webhook

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" in 12.699s (12.699s including waiting). Image size: 1637445817 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Created

Created container: whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Created

Created container: whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a" in 12.368s (12.368s including waiting). Image size: 876146500 bytes.

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Started

Started container approver

openshift-network-node-identity

master-0_18d831dd-12b5-4e52-8732-1eee1dbcec09

ovnkube-identity

LeaderElection

master-0_18d831dd-12b5-4e52-8732-1eee1dbcec09 became leader

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-xf7kg

Started

Started container whereabouts-cni
(x7)

openshift-multus

kubelet

network-metrics-daemon-l2bdp

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: northd
(x18)

openshift-multus

kubelet

network-metrics-daemon-l2bdp

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-rqhcv

Started

Started container sbdb

default

ovnkube-csr-approver-controller

csr-8scmp

CSRApproved

CSR "csr-8scmp" has been approved

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-rqhcv

default

ovnk-controlplane

master-0

ErrorAddingResource

[k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0]

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-x9v76

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-x9v76 to master-0

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-x9v76

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Started

Started container sbdb
(x8)

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-fjbl4

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-ovn-kubernetes

kubelet

ovnkube-node-x9v76

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine
(x7)

openshift-network-diagnostics

kubelet

network-check-target-fr4qr

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-qwzgb" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
(x18)

openshift-network-diagnostics

kubelet

network-check-target-fr4qr

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

default

ovnkube-csr-approver-controller

csr-jcs42

CSRApproved

CSR "csr-jcs42" has been approved

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-7d9c49f57b-j454x

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-j454x to master-0

openshift-dns-operator

default-scheduler

dns-operator-589895fbb7-wqqqr

Scheduled

Successfully assigned openshift-dns-operator/dns-operator-589895fbb7-wqqqr to master-0

openshift-service-ca-operator

multus

service-ca-operator-69b6fc6b88-cg9rz

AddedInterface

Add eth0 [10.128.0.21/23] from ovn-kubernetes

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-66c7586884-sxqnh

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-sxqnh to master-0

openshift-kube-storage-version-migrator-operator

multus

kube-storage-version-migrator-operator-7f65c457f5-bczvd

AddedInterface

Add eth0 [10.128.0.5/23] from ovn-kubernetes

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-7f65c457f5-bczvd

Scheduled

Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-bczvd to master-0

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953"

openshift-authentication-operator

multus

authentication-operator-7c6989d6c4-7w8wf

AddedInterface

Add eth0 [10.128.0.13/23] from ovn-kubernetes

openshift-etcd-operator

default-scheduler

etcd-operator-5884b9cd56-lc94h

Scheduled

Successfully assigned openshift-etcd-operator/etcd-operator-5884b9cd56-lc94h to master-0

openshift-machine-api

default-scheduler

cluster-baremetal-operator-5cdb4c5598-nmwjr

Scheduled

Successfully assigned openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-nmwjr to master-0

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-d64cfc9db-qd6xh

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qd6xh to master-0

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-854648ff6d-kr9ft

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kr9ft to master-0

openshift-multus

default-scheduler

multus-admission-controller-8d675b596-mmqbs

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-8d675b596-mmqbs to master-0

openshift-controller-manager-operator

default-scheduler

openshift-controller-manager-operator-8565d84698-98wdp

Scheduled

Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-98wdp to master-0

openshift-monitoring

default-scheduler

cluster-monitoring-operator-674cbfbd9d-czm5f

Scheduled

Successfully assigned openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-czm5f to master-0

openshift-authentication-operator

default-scheduler

authentication-operator-7c6989d6c4-7w8wf

Scheduled

Successfully assigned openshift-authentication-operator/authentication-operator-7c6989d6c4-7w8wf to master-0

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-5685fbc7d-txnh5

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-txnh5 to master-0

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-5c74bfc494-85z7m

Scheduled

Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-85z7m to master-0

openshift-image-registry

default-scheduler

cluster-image-registry-operator-86d6d77c7c-kg26q

Scheduled

Successfully assigned openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-kg26q to master-0

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba"

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-n8nz9

openshift-kube-scheduler-operator

multus

openshift-kube-scheduler-operator-5c74bfc494-85z7m

AddedInterface

Add eth0 [10.128.0.7/23] from ovn-kubernetes

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-77899cf6d-cgdkk

Scheduled

Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-cgdkk to master-0

openshift-ingress-operator

default-scheduler

ingress-operator-677db989d6-tklw9

Scheduled

Successfully assigned openshift-ingress-operator/ingress-operator-677db989d6-tklw9 to master-0

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver-operator

multus

kube-apiserver-operator-68bd585b-qnhrz

AddedInterface

Add eth0 [10.128.0.10/23] from ovn-kubernetes

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-85z7m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282"

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab"

openshift-apiserver-operator

multus

openshift-apiserver-operator-799b6db4d7-jtbd6

AddedInterface

Add eth0 [10.128.0.19/23] from ovn-kubernetes

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-799b6db4d7-jtbd6

Scheduled

Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jtbd6 to master-0

openshift-config-operator

default-scheduler

openshift-config-operator-64488f9d78-cb227

Scheduled

Successfully assigned openshift-config-operator/openshift-config-operator-64488f9d78-cb227 to master-0

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-68bd585b-qnhrz

Scheduled

Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qnhrz to master-0

openshift-network-operator

kubelet

iptables-alerter-n8nz9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460"

openshift-network-operator

default-scheduler

iptables-alerter-n8nz9

Scheduled

Successfully assigned openshift-network-operator/iptables-alerter-n8nz9 to master-0

openshift-marketplace

default-scheduler

marketplace-operator-64bf9778cb-q7hrg

Scheduled

Successfully assigned openshift-marketplace/marketplace-operator-64bf9778cb-q7hrg to master-0

openshift-service-ca-operator

default-scheduler

service-ca-operator-69b6fc6b88-cg9rz

Scheduled

Successfully assigned openshift-service-ca-operator/service-ca-operator-69b6fc6b88-cg9rz to master-0

openshift-kube-controller-manager-operator

default-scheduler

kube-controller-manager-operator-86d7cdfdfb-wb26b

Scheduled

Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-wb26b to master-0

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-98wdp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b"

openshift-controller-manager-operator

multus

openshift-controller-manager-operator-8565d84698-98wdp

AddedInterface

Add eth0 [10.128.0.22/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

multus

kube-controller-manager-operator-86d7cdfdfb-wb26b

AddedInterface

Add eth0 [10.128.0.18/23] from ovn-kubernetes

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-bczvd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9"

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Failed

Error: ErrImagePull

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56"

openshift-cluster-storage-operator

multus

csi-snapshot-controller-operator-5685fbc7d-txnh5

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes

openshift-cluster-olm-operator

multus

cluster-olm-operator-77899cf6d-cgdkk

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-config-operator

multus

openshift-config-operator-64488f9d78-cb227

AddedInterface

Add eth0 [10.128.0.14/23] from ovn-kubernetes

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43"

openshift-etcd-operator

multus

etcd-operator-5884b9cd56-lc94h

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783"

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Created

Created container: kube-apiserver-operator

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56": pull QPS exceeded

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Started

Started container kube-apiserver-operator

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3"

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-68bd585b-qnhrz_3f9938d9-8c0c-4001-9557-8f4fbfe485a5 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodeObserved

Observed new master node master-0
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Failed

Error: ImagePullBackOff
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

BackOff

Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.34"
(x4)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodesReadyChanged

All master nodes are ready
(x4)

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-serviceaccountissuercontroller

kube-apiserver-operator

ServiceAccountIssuer

Issuer set to default value "https://kubernetes.default.svc"
(x4)

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: EvaluationConditionsDetected changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from Unknown to False ("All is well")
(x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found
(x4)

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found
(x4)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
(x4)

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-czm5f

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.34"}]
(x4)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
(x4)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x4)

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379,https://localhost:2379

default

kubelet

master-0

Starting

Starting kubelet.

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "admission": map[string]any{ +  "pluginConfig": map[string]any{ +  "PodSecurity": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, +  }, +  }, +  "apiServerArguments": map[string]any{ +  "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  "goaway-chance": []any{string("0")}, +  "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, +  "send-retry-after-while-not-ready-once": []any{string("true")}, +  "service-account-issuer": []any{string("https://kubernetes.default.svc")}, +  "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, +  "shutdown-delay-duration": []any{string("0s")}, +  }, +  "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, +  "gracefulTerminationDuration": string("15"), +  "servicesSubnet": string("172.30.0.0/16"), +  "servingInfo": map[string]any{ +  "bindAddress": string("0.0.0.0:6443"), +  "bindNetwork": string("tcp4"), +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  
string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  "namedCertificates": []any{ +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resou"...), +  "keyFile": string("/etc/kubernetes/static-pod-resou"...), +  }, +  }, +  },   }

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,P
innedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-network-operator

kubelet

iptables-alerter-n8nz9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460"

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56"

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-bczvd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-98wdp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3"

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-85z7m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Failed

Error: ErrImagePull

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3": rpc error: code = Canceled desc = copying config: context canceled

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Failed

Error: ErrImagePull

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953": rpc error: code = Canceled desc = copying config: context canceled
(x30)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Failed

Error: ErrImagePull

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43": rpc error: code = Canceled desc = copying config: context canceled

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab": rpc error: code = Canceled desc = copying config: context canceled

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba": rpc error: code = Canceled desc = copying config: context canceled

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Failed

Error: ErrImagePull

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Failed

Error: ErrImagePull

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-network-diagnostics

multus

network-check-target-fr4qr

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-network-diagnostics

kubelet

network-check-target-fr4qr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-bczvd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9" in 6.487s (6.487s including waiting). Image size: 504623546 bytes.

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-98wdp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b" in 6.48s (6.48s including waiting). Image size: 507967997 bytes.

openshift-network-diagnostics

kubelet

network-check-target-fr4qr

Created

Created container: network-check-target-container

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-85z7m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" in 6.193s (6.193s including waiting). Image size: 506394574 bytes.

openshift-network-operator

kubelet

iptables-alerter-n8nz9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" in 6.522s (6.522s including waiting). Image size: 582153879 bytes.

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" in 6.19s (6.19s including waiting). Image size: 448041621 bytes.

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3" in 6.19s (6.19s including waiting). Image size: 506479655 bytes.

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" in 5.879s (5.879s including waiting). Image size: 508888174 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-8565d84698-98wdp_d4e2ab3a-1978-45f9-a409-61d890798f8c became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ServiceAccountCreated

Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-86d7cdfdfb-wb26b_d7fa3b8b-d528-4952-a3fa-c8f16da35d4d became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-5685fbc7d-txnh5_22e814e2-e65f-47c0-97d4-f64cf80d8a5a became leader

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Started

Started container csi-snapshot-controller-operator

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Created

Created container: copy-catalogd-manifests

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.34"

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "build": map[string]any{
+     "buildDefaults": map[string]any{"resources": map[string]any{}},
+     "imageTemplateFormat": map[string]any{
+       "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e95c47e9d"...),
+     },
+   },
+   "controllers": []any{
+     string("openshift.io/build"), string("openshift.io/build-config-change"),
+     string("openshift.io/builder-rolebindings"),
+     string("openshift.io/builder-serviceaccount"),
+     string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"),
+     string("openshift.io/deployer-rolebindings"),
+     string("openshift.io/deployer-serviceaccount"),
+     string("openshift.io/deploymentconfig"), string("openshift.io/image-import"),
+     string("openshift.io/image-puller-rolebindings"),
+     string("openshift.io/image-signature-import"),
+     string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"),
+     string("openshift.io/ingress-to-route"),
+     string("openshift.io/origin-namespace"), ...,
+   },
+   "deployer": map[string]any{
+     "imageTemplateFormat": map[string]any{
+       "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52d35a623b"...),
+     },
+   },
+   "featureGates": []any{string("BuildCSIVolumes=true")},
+   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
  }

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Started

Started container copy-catalogd-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator namespace

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-7f65c457f5-bczvd_cab0d8ac-6257-42aa-b3e5-5298e3a35000 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-57ccdf9b5 to 1

openshift-kube-storage-version-migrator

replicaset-controller

migrator-57ccdf9b5

SuccessfulCreate

Created pod: migrator-57ccdf9b5-5l6h9

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Created

Created container: csi-snapshot-controller-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-diagnostics

kubelet

network-check-target-fr4qr

Started

Started container network-check-target-container

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "extendedArguments": map[string]any{ +  "cluster-cidr": []any{string("10.128.0.0/16")}, +  "cluster-name": []any{string("sno-ppdqs")}, +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  "service-cluster-ip-range": []any{string("172.30.0.0/16")}, +  }, +  "featureGates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), +  string("DisableKubeletCloudCredentialProviders=true"), +  string("GCPLabelsTags=true"), string("HardwareSpeed=true"), +  string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), +  string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), +  string("MultiArchInstallAWS=true"), ..., +  }, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.34"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found",Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.34"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.34"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-7577d6f48 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create configmap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-7577d6f48

SuccessfulCreate

Created pod: csi-snapshot-controller-7577d6f48-kzjmp

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.34"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreateFailed

Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1"

openshift-cluster-storage-operator

multus

csi-snapshot-controller-7577d6f48-kzjmp

AddedInterface

Add eth0 [10.128.0.29/23] from ovn-kubernetes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-route-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager namespace

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-7577d6f48-kzjmp

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kzjmp to master-0

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-5c74bfc494-85z7m_435896c8-0648-4a2c-8ea9-9fcdd0a4e721 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager

default-scheduler

controller-manager-6f7fd6c796-852rp

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-6f7fd6c796-852rp to master-0

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
(x7)

openshift-controller-manager

replicaset-controller

controller-manager-6f7fd6c796

FailedCreate

Error creating: pods "controller-manager-6f7fd6c796-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-kube-storage-version-migrator

default-scheduler

migrator-57ccdf9b5-5l6h9

Scheduled

Successfully assigned openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5l6h9 to master-0

openshift-kube-storage-version-migrator

multus

migrator-57ccdf9b5-5l6h9

AddedInterface

Add eth0 [10.128.0.28/23] from ovn-kubernetes

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6f7fd6c796 to 1

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-5l6h9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053"

openshift-controller-manager

replicaset-controller

controller-manager-6f7fd6c796

SuccessfulCreate

Created pod: controller-manager-6f7fd6c796-852rp
(x2)

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-852rp

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-dbd867658 to 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-6f7fd6c796 to 0 from 1

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5c878ff668 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-dbd867658

SuccessfulCreate

Created pod: route-controller-manager-dbd867658-rkw4l
(x2)

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-852rp

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager

default-scheduler

controller-manager-5c878ff668-cbqks

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-5c878ff668

SuccessfulCreate

Created pod: controller-manager-5c878ff668-cbqks

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-6f7fd6c796

SuccessfulDelete

Deleted pod: controller-manager-6f7fd6c796-852rp

openshift-route-controller-manager

default-scheduler

route-controller-manager-dbd867658-rkw4l

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-dbd867658-rkw4l to master-0

openshift-network-operator

kubelet

iptables-alerter-n8nz9

Created

Created container: iptables-alerter

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1" in 2.323s (2.323s including waiting). Image size: 463700811 bytes.

openshift-network-operator

kubelet

iptables-alerter-n8nz9

Started

Started container iptables-alerter

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing
(x3)

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-852rp

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x3)

openshift-controller-manager

kubelet

controller-manager-6f7fd6c796-852rp

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc" in 3.224s (3.224s including waiting). Image size: 495064829 bytes.

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-5l6h9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053" in 2.36s (2.36s including waiting). Image size: 443271011 bytes.

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready"
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-5l6h9

Created

Created container: graceful-termination

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-5l6h9

Started

Started container graceful-termination

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5c878ff668 to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-57b874d6cb to 1 from 0

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-5l6h9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-7577d6f48-kzjmp

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-7577d6f48-kzjmp became leader

openshift-controller-manager

default-scheduler

controller-manager-57b874d6cb-w8kbv

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"csi-snapshot-controller" "4.18.34"} {"operator" "4.18.34"}]

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-5l6h9

Started

Started container migrator

openshift-kube-storage-version-migrator

kubelet

migrator-57ccdf9b5-5l6h9

Created

Created container: migrator

openshift-controller-manager

replicaset-controller

controller-manager-57b874d6cb

SuccessfulCreate

Created pod: controller-manager-57b874d6cb-w8kbv
(x2)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.34"
(x2)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.34"

openshift-controller-manager

default-scheduler

controller-manager-5c878ff668-cbqks

FailedScheduling

skip schedule deleting pod: openshift-controller-manager/controller-manager-5c878ff668-cbqks

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-controller-manager

replicaset-controller

controller-manager-5c878ff668

SuccessfulDelete

Deleted pod: controller-manager-5c878ff668-cbqks

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

Started

Started container snapshot-controller

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

Created

Created container: snapshot-controller

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Started

Started container copy-operator-controller-manifests

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Created

Created container: copy-operator-controller-manifests

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager

default-scheduler

controller-manager-57b874d6cb-w8kbv

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-57b874d6cb-w8kbv to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed
(x2)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6" in 2.533s (2.533s including waiting). Image size: 511164376 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing
(x6)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found
(x6)

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-fjbl4

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing
(x6)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953" in 424ms (424ms including waiting). Image size: 513220825 bytes.

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing
(x2)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.34"
(x6)

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x6)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
(x6)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
(x6)

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-77899cf6d-cgdkk_2cc927c3-5154-4774-88ca-f27d13669f54 became leader
(x6)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found
(x6)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
(x2)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing
(x6)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
(x6)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found
(x6)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace
(x2)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3"

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" in 675ms (675ms including waiting). Image size: 518384455 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "operator" changed from "" to "4.18.34"

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43" in 624ms (624ms including waiting). Image size: 438654375 bytes.

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Created

Created container: openshift-api

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Started

Started container openshift-api

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7c6989d6c4-7w8wf_13d20797-5cbe-4c37-90cd-4aed9fe9729c became leader

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."),Upgradeable changed from Unknown to True ("All is well")

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well")

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-69b6fc6b88-cg9rz_8874ba2a-531d-4cb8-9eeb-18e721a06c4f became leader
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.34"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.34"}]

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-5884b9cd56-lc94h_33e5904d-d738-42cd-8df9-efa947fe9147 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" in 400ms (400ms including waiting). Image size: 508544235 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca namespace

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5" in 2.018s (2.018s including waiting). Image size: 495994161 bytes.

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Created

Created container: openshift-config-operator

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab" in 473ms (473ms including waiting). Image size: 512273539 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.34"} {"feature-gates" "4.18.34"}]
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.34"
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.34"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{OperatorAvailable True 2026-03-07 21:14:48 +0000 UTC AsExpected } {OperatorProgressing False 2026-03-07 21:14:48 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-03-07 21:14:48 +0000 UTC AsExpected }]

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-64488f9d78-cb227_0d4256b2-853e-4248-825f-2006072b9b8c became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Started

Started container openshift-config-operator

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "controlPlane": map[string]any{"replicas": float64(1)}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.34"}]
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.34"

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-799b6db4d7-jtbd6_6b093f97-4589-418c-bcc9-dbeed98aadb9 became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from to https://kubernetes.default.svc

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-service-ca

replicaset-controller

service-ca-84bfdbbb7f

SuccessfulCreate

Created pod: service-ca-84bfdbbb7f-h76wh

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-84bfdbbb7f to 1

openshift-service-ca

multus

service-ca-84bfdbbb7f-h76wh

AddedInterface

Add eth0 [10.128.0.33/23] from ovn-kubernetes

openshift-service-ca

default-scheduler

service-ca-84bfdbbb7f-h76wh

Scheduled

Successfully assigned openshift-service-ca/service-ca-84bfdbbb7f-h76wh to master-0

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from to https://api.sno.openstack.lab:6443

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, }

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found"

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-h76wh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n"

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

RequiredInstallerResourcesMissing

configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found")

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.sno.openstack.lab"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentUpdated

Updated Deployment.apps/service-ca -n openshift-service-ca because it changed

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-h76wh

Started

Started container service-ca-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-h76wh

Created

Created container: service-ca-controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing
(x5)

openshift-controller-manager

kubelet

controller-manager-57b874d6cb-w8kbv

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-ppdqs")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}},    "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, +  "serviceServingCert": map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), +  },    "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},   }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server"
(x45)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.34"

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-84bfdbbb7f-h76wh_ca42fa31-9684-4efd-84bd-2bed92783b4d became leader

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

NoValidCertificateFound

No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator

authentication-operator

CSRApproval

The CSR "system:openshift:openshift-authenticator-gj8n9" has been approved

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

CSRCreated

A csr "system:openshift:openshift-authenticator-gj8n9" is created for OpenShiftAuthenticatorCertRequester

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-oauth-apiserver namespace

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

NamespaceCreated

Created Namespace/openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "required configmap/serviceaccount-ca has changed"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-operator-controller

default-scheduler

operator-controller-controller-manager-6598bfb6c4-mlxbw

Scheduled

Successfully assigned openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-mlxbw to master-0

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-catalogd

replicaset-controller

catalogd-controller-manager-7f8b8b6f4c

SuccessfulCreate

Created pod: catalogd-controller-manager-7f8b8b6f4c-mc2rc

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-7f8b8b6f4c to 1

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-apiserver because it was missing
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-dbd867658-rkw4l

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-dbd867658-rkw4l

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-catalogd

default-scheduler

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Scheduled

Successfully assigned openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mc2rc to master-0

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-6598bfb6c4 to 1

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-6598bfb6c4

SuccessfulCreate

Created pod: operator-controller-controller-manager-6598bfb6c4-mlxbw

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found"

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Created

Created container: kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-catalogd

multus

catalogd-controller-manager-7f8b8b6f4c-mc2rc

AddedInterface

Add eth0 [10.128.0.34/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing

openshift-operator-controller

multus

operator-controller-controller-manager-6598bfb6c4-mlxbw

AddedInterface

Add eth0 [10.128.0.35/23] from ovn-kubernetes

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Created

Created container: kube-rbac-proxy

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Created

Created container: manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Started

Started container manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Started

Started container manager

openshift-operator-controller

operator-controller-controller-manager-6598bfb6c4-mlxbw_bb9bb87c-25b0-4614-8a69-c1b3c338a989

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-6598bfb6c4-mlxbw_bb9bb87c-25b0-4614-8a69-c1b3c338a989 became leader
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Created

Created container: manager

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well"

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing

openshift-catalogd

catalogd-controller-manager-7f8b8b6f4c-mc2rc_fd1f5842-85ab-4521-b644-a1da63848e8f

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7f8b8b6f4c-mc2rc_fd1f5842-85ab-4521-b644-a1da63848e8f became leader

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret
(x6)

openshift-controller-manager

kubelet

controller-manager-57b874d6cb-w8kbv

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-f49f8b76c to 1 from 0

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-route-controller-manager

replicaset-controller

route-controller-manager-7c8cdf56b5

SuccessfulCreate

Created pod: route-controller-manager-7c8cdf56b5-h464s

openshift-route-controller-manager

replicaset-controller

route-controller-manager-dbd867658

SuccessfulDelete

Deleted pod: route-controller-manager-dbd867658-rkw4l

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-dbd867658 to 0 from 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-7c8cdf56b5 to 1 from 0

openshift-controller-manager

replicaset-controller

controller-manager-57b874d6cb

SuccessfulDelete

Deleted pod: controller-manager-57b874d6cb-w8kbv

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-f877dfd9f to 1

openshift-apiserver

replicaset-controller

apiserver-f877dfd9f

SuccessfulCreate

Created pod: apiserver-f877dfd9f-cnjsr

openshift-apiserver

default-scheduler

apiserver-f877dfd9f-cnjsr

Scheduled

Successfully assigned openshift-apiserver/apiserver-f877dfd9f-cnjsr to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

default-scheduler

controller-manager-f49f8b76c-p7dfh

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

replicaset-controller

controller-manager-f49f8b76c

SuccessfulCreate

Created pod: controller-manager-f49f8b76c-p7dfh

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-57b874d6cb to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-apiserver

kubelet

apiserver-f877dfd9f-cnjsr

FailedMount

MountVolume.SetUp failed for volume "etcd-client" : secret "etcd-client" not found
(x7)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
(x7)

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"
(x7)

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: secrets \"etcd-client\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml
(x7)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
(x4)

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"
(x7)

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-czm5f

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x7)

openshift-multus

kubelet

network-metrics-daemon-l2bdp

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing
(x7)

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x7)

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7"

openshift-image-registry

multus

cluster-image-registry-operator-86d6d77c7c-kg26q

AddedInterface

Add eth0 [10.128.0.6/23] from ovn-kubernetes

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70"

openshift-machine-api

multus

cluster-baremetal-operator-5cdb4c5598-nmwjr

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-controller-manager

default-scheduler

controller-manager-f49f8b76c-p7dfh

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-f49f8b76c-p7dfh to master-0

openshift-ingress-operator

multus

ingress-operator-677db989d6-tklw9

AddedInterface

Add eth0 [10.128.0.12/23] from ovn-kubernetes

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0"

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda"

openshift-dns-operator

multus

dns-operator-589895fbb7-wqqqr

AddedInterface

Add eth0 [10.128.0.17/23] from ovn-kubernetes
(x2)

openshift-route-controller-manager

default-scheduler

route-controller-manager-7c8cdf56b5-h464s

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d"

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-66c7586884-sxqnh

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

multus

controller-manager-f49f8b76c-p7dfh

AddedInterface

Add eth0 [10.128.0.38/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-f49f8b76c-p7dfh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5"

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-fjbl4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"
(x4)

openshift-apiserver

kubelet

apiserver-f877dfd9f-cnjsr

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-apiserver

replicaset-controller

apiserver-694d775589

SuccessfulCreate

Created pod: apiserver-694d775589-btnh4

openshift-route-controller-manager

default-scheduler

route-controller-manager-7c8cdf56b5-h464s

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-7c8cdf56b5-h464s to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt
(x4)

openshift-apiserver

kubelet

apiserver-f877dfd9f-cnjsr

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found

openshift-apiserver

default-scheduler

apiserver-694d775589-btnh4

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver

replicaset-controller

apiserver-f877dfd9f

SuccessfulDelete

Deleted pod: apiserver-f877dfd9f-cnjsr

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-f877dfd9f to 0 from 1

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-694d775589 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-apiserver

default-scheduler

apiserver-694d775589-btnh4

Scheduled

Successfully assigned openshift-apiserver/apiserver-694d775589-btnh4 to master-0
(x2)

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-7c8cdf56b5-h464s

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

default-scheduler

apiserver-67cf6dffcb-4z6hx

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-67cf6dffcb-4z6hx to master-0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-67cf6dffcb to 1

openshift-oauth-apiserver

replicaset-controller

apiserver-67cf6dffcb

SuccessfulCreate

Created pod: apiserver-67cf6dffcb-4z6hx

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing
(x19)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Created

Created container: cluster-baremetal-operator

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0" in 9.109s (9.109s including waiting). Image size: 511226810 bytes.

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70" in 9.282s (9.282s including waiting). Image size: 677929075 bytes.

openshift-controller-manager

kubelet

controller-manager-f49f8b76c-p7dfh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5" in 9.117s (9.117s including waiting). Image size: 558210153 bytes.

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-apiserver

multus

apiserver-694d775589-btnh4

AddedInterface

Add eth0 [10.128.0.40/23] from ovn-kubernetes

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-fjbl4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" in 9.563s (9.563s including waiting). Image size: 517997625 bytes.

openshift-route-controller-manager

multus

route-controller-manager-7c8cdf56b5-h464s

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-fjbl4

Created

Created container: cluster-version-operator

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda" in 9.117s (9.117s including waiting). Image size: 468263999 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Started

Started container cluster-baremetal-operator

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d" in 9.117s (9.117s including waiting). Image size: 470822665 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Created

Created container: cluster-baremetal-operator

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Started

Started container cluster-baremetal-operator

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7" in 9.24s (9.24s including waiting). Image size: 548751793 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-qzjmv

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Started

Started container kube-rbac-proxy

openshift-dns

default-scheduler

dns-default-hm77f

Scheduled

Successfully assigned openshift-dns/dns-default-hm77f to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-machine-api

cluster-baremetal-operator-5cdb4c5598-nmwjr_3dff51af-588e-497b-97d0-cb2e32e503da

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-5cdb4c5598-nmwjr_3dff51af-588e-497b-97d0-cb2e32e503da became leader

openshift-oauth-apiserver

kubelet

apiserver-67cf6dffcb-4z6hx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9"

openshift-oauth-apiserver

multus

apiserver-67cf6dffcb-4z6hx

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Created

Created container: cluster-node-tuning-operator

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Started

Started container cluster-node-tuning-operator

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_ed4e413d-060d-4166-a281-bd2fee302079 became leader

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-66c7586884-sxqnh_71c81600-e49d-452a-9ff2-7347b409cefe

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-66c7586884-sxqnh_71c81600-e49d-452a-9ff2-7347b409cefe became leader

openshift-cluster-node-tuning-operator

default-scheduler

tuned-qzjmv

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-qzjmv to master-0

openshift-cluster-node-tuning-operator

kubelet

tuned-qzjmv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70" already present on machine

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b"

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-hm77f

openshift-controller-manager

kubelet

controller-manager-f49f8b76c-p7dfh

Created

Created container: controller-manager

openshift-controller-manager

kubelet

controller-manager-f49f8b76c-p7dfh

Started

Started container controller-manager

openshift-controller-manager

kubelet

controller-manager-f49f8b76c-p7dfh

ProbeError

Readiness probe error: Get "https://10.128.0.38:8443/healthz": dial tcp 10.128.0.38:8443: connect: connection refused body:

openshift-controller-manager

kubelet

controller-manager-f49f8b76c-p7dfh

Unhealthy

Readiness probe failed: Get "https://10.128.0.38:8443/healthz": dial tcp 10.128.0.38:8443: connect: connection refused

openshift-route-controller-manager

kubelet

route-controller-manager-7c8cdf56b5-h464s

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" (x63)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-86d6d77c7c-kg26q_d86a9ce2-ccaa-4d70-8203-1ab5a8e2545f became leader

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Started

Started container baremetal-kube-rbac-proxy

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Created

Created container: baremetal-kube-rbac-proxy

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

Created

Created container: cluster-image-registry-operator

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

Started

Started container cluster-image-registry-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.41/23] from ovn-kubernetes

openshift-kube-scheduler

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler

kubelet

installer-2-master-0

Created

Created container: installer

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

Created

Created container: dns-operator

openshift-kube-scheduler

kubelet

installer-2-master-0

Started

Started container installer

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

Started

Started container dns-operator

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

Created

Created container: kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-589895fbb7-wqqqr

Started

Started container kube-rbac-proxy

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-fjbl4

Started

Started container cluster-version-operator

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-f49f8b76c-p7dfh became leader

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Created

Created container: kube-rbac-proxy

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-cluster-node-tuning-operator

kubelet

tuned-qzjmv

Created

Created container: tuned

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-zhkfm

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: secrets \"etcd-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

openshift-dns

kubelet

dns-default-hm77f

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-ingress

default-scheduler

router-default-79f8cd6fdd-858hg

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ingress

replicaset-controller

router-default-79f8cd6fdd

SuccessfulCreate

Created pod: router-default-79f8cd6fdd-858hg

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-dns

multus

dns-default-hm77f

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-dns

default-scheduler

node-resolver-zhkfm

Scheduled

Successfully assigned openshift-dns/node-resolver-zhkfm to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-dns

kubelet

node-resolver-zhkfm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" already present on machine

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-79f8cd6fdd to 1

openshift-cluster-node-tuning-operator

kubelet

tuned-qzjmv

Started

Started container tuned

openshift-dns

kubelet

dns-default-hm77f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955"

openshift-cluster-node-tuning-operator

kubelet

tuned-qzjmv

Started

Started container tuned

openshift-dns

kubelet

node-resolver-zhkfm

Started

Started container dns-node-resolver

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-dns

kubelet

node-resolver-zhkfm

Created

Created container: dns-node-resolver

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-route-controller-manager

kubelet

route-controller-manager-7c8cdf56b5-h464s

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" in 2.976s (2.976s including waiting). Image size: 487090672 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"

openshift-route-controller-manager

kubelet

route-controller-manager-7c8cdf56b5-h464s

Created

Created container: route-controller-manager

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-7c8cdf56b5-h464s_0ee66154-5039-4e97-b5d6-69749b35966d became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-route-controller-manager

kubelet

route-controller-manager-7c8cdf56b5-h464s

Started

Started container route-controller-manager

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-64655dcbb9 to 1 from 0

openshift-route-controller-manager

replicaset-controller

route-controller-manager-7c8cdf56b5

SuccessfulDelete

Deleted pod: route-controller-manager-7c8cdf56b5-h464s

openshift-controller-manager

replicaset-controller

controller-manager-f49f8b76c

SuccessfulDelete

Deleted pod: controller-manager-f49f8b76c-p7dfh

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-f49f8b76c to 0 from 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-7c8cdf56b5 to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-route-controller-manager: cause by changes in data.ca-bundle.crt

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-controller-manager: cause by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-scheduler

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-controller-manager

kubelet

controller-manager-f49f8b76c-p7dfh

Killing

Stopping container controller-manager

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.34"

openshift-controller-manager

default-scheduler

controller-manager-64655dcbb9-bp4zn

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-7df7f5b8c to 1 from 0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap,data.openshift-route-controller-manager.serving-cert.secret

openshift-controller-manager

replicaset-controller

controller-manager-64655dcbb9

SuccessfulCreate

Created pod: controller-manager-64655dcbb9-bp4zn

openshift-oauth-apiserver

kubelet

apiserver-67cf6dffcb-4z6hx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9" in 6.645s (6.645s including waiting). Image size: 505344964 bytes. (x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-dns

kubelet

dns-default-hm77f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955" in 4.838s (4.838s including waiting). Image size: 484175664 bytes.

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b" in 6.69s (6.69s including waiting). Image size: 589379637 bytes.

openshift-route-controller-manager

replicaset-controller

route-controller-manager-7df7f5b8c

SuccessfulCreate

Created pod: route-controller-manager-7df7f5b8c-5rhtx

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Created

Created container: fix-audit-permissions

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Started

Started container fix-audit-permissions

openshift-route-controller-manager

kubelet

route-controller-manager-7c8cdf56b5-h464s

Killing

Stopping container route-controller-manager

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b" already present on machine

openshift-dns

kubelet

dns-default-hm77f

Started

Started container kube-rbac-proxy

openshift-dns

kubelet

dns-default-hm77f

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-hm77f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Started

Started container openshift-apiserver

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine (x2)

openshift-route-controller-manager

default-scheduler

route-controller-manager-7df7f5b8c-5rhtx

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-dns

kubelet

dns-default-hm77f

Started

Started container dns

openshift-dns

kubelet

dns-default-hm77f

Created

Created container: dns

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Created

Created container: openshift-apiserver

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-controller-manager

default-scheduler

controller-manager-64655dcbb9-bp4zn

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-64655dcbb9-bp4zn to master-0

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing

openshift-oauth-apiserver

kubelet

apiserver-67cf6dffcb-4z6hx

Started

Started container oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-67cf6dffcb-4z6hx

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-67cf6dffcb-4z6hx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-67cf6dffcb-4z6hx

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-67cf6dffcb-4z6hx

Created

Created container: fix-audit-permissions

openshift-controller-manager

kubelet

controller-manager-64655dcbb9-bp4zn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5" already present on machine

openshift-controller-manager

kubelet

controller-manager-64655dcbb9-bp4zn

Started

Started container controller-manager

openshift-kube-scheduler

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

installer-3-master-0

Created

Created container: installer

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Started

Started container openshift-apiserver-check-endpoints

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-3-master-0

Started

Started container installer

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-64655dcbb9-bp4zn became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-controller-manager

multus

controller-manager-64655dcbb9-bp4zn

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Created

Created container: openshift-apiserver-check-endpoints

openshift-controller-manager

kubelet

controller-manager-64655dcbb9-bp4zn

Created

Created container: controller-manager

openshift-route-controller-manager

default-scheduler

route-controller-manager-7df7f5b8c-5rhtx

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-7df7f5b8c-5rhtx to master-0

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"user.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.user.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.oauth.openshift.io because it was missing

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-7df7f5b8c-5rhtx

Started

Started container route-controller-manager

openshift-route-controller-manager

multus

route-controller-manager-7df7f5b8c-5rhtx

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-7df7f5b8c-5rhtx

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-7df7f5b8c-5rhtx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" already present on machine

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-apiserver

kubelet

apiserver-694d775589-btnh4

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-7df7f5b8c-5rhtx_1a4c48e8-1c3f-4bfb-a4d9-b3ee5d434a35 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.34"}] to [{"operator" "4.18.34"} {"oauth-apiserver" "4.18.34"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.34"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-version

replicaset-controller

cluster-version-operator-745944c6b7

SuccessfulDelete

Deleted pod: cluster-version-operator-745944c6b7-fjbl4

openshift-cluster-version

kubelet

cluster-version-operator-745944c6b7-fjbl4

Killing

Stopping container cluster-version-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-745944c6b7 to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.authorization.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.project.openshift.io because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.34"}] to [{"operator" "4.18.34"} {"openshift-apiserver" "4.18.34"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.34"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.route.openshift.io because it was missing

openshift-cluster-version

replicaset-controller

cluster-version-operator-8c9c967c7

SuccessfulCreate

Created pod: cluster-version-operator-8c9c967c7-s44f4

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-s44f4

Started

Started container cluster-version-operator

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.quota.openshift.io because it was missing

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-8c9c967c7 to 1

openshift-cluster-version

default-scheduler

cluster-version-operator-8c9c967c7-s44f4

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-8c9c967c7-s44f4 to master-0

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.image.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.build.openshift.io because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-s44f4

Created

Created container: cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-s44f4

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.apps.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_53406cb4-6985-4771-9a8c-1aee435e93e3 became leader

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.security.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.template.openshift.io because it was missing

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-kube-scheduler

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-kube-controller-manager

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/template.openshift.io/v1: 401"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing

openshift-operator-lifecycle-manager

multus

catalog-operator-7d9c49f57b-j454x

AddedInterface

Add eth0 [10.128.0.25/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914"

openshift-kube-scheduler

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

multus

package-server-manager-854648ff6d-kr9ft

AddedInterface

Add eth0 [10.128.0.11/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9"

openshift-multus

multus

multus-admission-controller-8d675b596-mmqbs

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-czm5f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e"

openshift-operator-lifecycle-manager

multus

olm-operator-d64cfc9db-qd6xh

AddedInterface

Add eth0 [10.128.0.23/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8"

openshift-multus

multus

network-metrics-daemon-l2bdp

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626"

openshift-marketplace

multus

marketplace-operator-64bf9778cb-q7hrg

AddedInterface

Add eth0 [10.128.0.8/23] from ovn-kubernetes

openshift-monitoring

multus

cluster-monitoring-operator-674cbfbd9d-czm5f

AddedInterface

Add eth0 [10.128.0.9/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-scheduler

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-controller-manager

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Started

Started container kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-route-controller-manager

kubelet

route-controller-manager-7df7f5b8c-5rhtx

Killing

Stopping container route-controller-manager

openshift-controller-manager

replicaset-controller

controller-manager-64655dcbb9

SuccessfulDelete

Deleted pod: controller-manager-64655dcbb9-bp4zn

openshift-controller-manager

kubelet

controller-manager-64655dcbb9-bp4zn

Killing

Stopping container controller-manager

openshift-route-controller-manager

replicaset-controller

route-controller-manager-cdf659ffc

SuccessfulCreate

Created pod: route-controller-manager-cdf659ffc-4969h

openshift-controller-manager

default-scheduler

controller-manager-86d86fcf49-hgbkg

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

replicaset-controller

controller-manager-86d86fcf49

SuccessfulCreate

Created pod: controller-manager-86d86fcf49-hgbkg

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-cdf659ffc to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-7df7f5b8c to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap
(x2)

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

(combined from similar events): Scaled up replica set controller-manager-86d86fcf49 to 1 from 0
(x5)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap

openshift-route-controller-manager

default-scheduler

route-controller-manager-cdf659ffc-4969h

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

replicaset-controller

route-controller-manager-7df7f5b8c

SuccessfulDelete

Deleted pod: route-controller-manager-7df7f5b8c-5rhtx

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Created

Created container: network-metrics-daemon

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-8464df8497-lxzml

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-8464df8497

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-8464df8497-lxzml

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-8464df8497 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready"

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-2ntff" is created for OpenShiftMonitoringClientCertRequester

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626" in 3.228s (3.228s including waiting). Image size: 448828105 bytes.

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4."

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9" in 3.324s (3.324s including waiting). Image size: 456575686 bytes.

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-zxljd" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-czm5f

Started

Started container cluster-monitoring-operator

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-czm5f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e" in 3.387s (3.387s including waiting). Image size: 484450382 bytes.

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-2ntff" has been approved

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-zxljd" has been approved

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914" in 3.495s (3.495s including waiting). Image size: 458126424 bytes.

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Created

Created container: marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Started

Started container marketplace-operator

openshift-monitoring

kubelet

cluster-monitoring-operator-674cbfbd9d-czm5f

Created

Created container: cluster-monitoring-operator

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

ProbeError

Readiness probe error: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused body:

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Unhealthy

Readiness probe failed: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Started

Started container network-metrics-daemon

openshift-controller-manager

default-scheduler

controller-manager-86d86fcf49-hgbkg

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-86d86fcf49-hgbkg to master-0

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-l2bdp

Created

Created container: kube-rbac-proxy

openshift-route-controller-manager

default-scheduler

route-controller-manager-cdf659ffc-4969h

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-cdf659ffc-4969h to master-0

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Started

Started container kube-apiserver-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Created

Created container: kube-apiserver-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

Created

Created container: route-controller-manager

openshift-controller-manager

kubelet

controller-manager-86d86fcf49-hgbkg

Started

Started container controller-manager

openshift-controller-manager

multus

controller-manager-86d86fcf49-hgbkg

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

kubelet

controller-manager-86d86fcf49-hgbkg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5" already present on machine

openshift-controller-manager

kubelet

controller-manager-86d86fcf49-hgbkg

Created

Created container: controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

Started

Started container route-controller-manager

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-86d86fcf49-hgbkg became leader

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-cdf659ffc-4969h

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-68bd585b-qnhrz_6965e973-8325-4049-baf2-416652151e11 became leader

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-cdf659ffc-4969h_669149d0-e25e-4efa-8e2f-719ef28d2e3f became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-6686554ddc to 1

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-6686554ddc

SuccessfulCreate

Created pod: control-plane-machine-set-operator-6686554ddc-dgjgz

openshift-machine-api

default-scheduler

control-plane-machine-set-operator-6686554ddc-dgjgz

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-6686554ddc-dgjgz to master-0

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-6686554ddc to 1
(x29)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" in 9.398s (9.399s including waiting). Image size: 862633255 bytes.

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" in 9.224s (9.224s including waiting). Image size: 862633255 bytes.

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Started

Started container package-server-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Created

Created container: package-server-manager
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-98wdp

Created

Created container: openshift-controller-manager-operator
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-98wdp

Started

Started container openshift-controller-manager-operator

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-8565d84698-98wdp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

ProbeError

Liveness probe error: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused body:

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Unhealthy

Liveness probe failed: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Killing

Container openshift-config-operator failed liveness probe, will be restarted
(x3)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Unhealthy

Liveness probe failed: Get "https://10.128.0.14:8443/healthz": dial tcp 10.128.0.14:8443: connect: connection refused
(x3)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

ProbeError

Liveness probe error: Get "https://10.128.0.14:8443/healthz": dial tcp 10.128.0.14:8443: connect: connection refused body:
(x3)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

Unhealthy

Readiness probe failed: Get "https://10.128.0.14:8443/healthz": dial tcp 10.128.0.14:8443: connect: connection refused
(x4)

openshift-config-operator

kubelet

openshift-config-operator-64488f9d78-cb227

ProbeError

Readiness probe error: Get "https://10.128.0.14:8443/healthz": dial tcp 10.128.0.14:8443: connect: connection refused body:

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x3)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

ProbeError

Liveness probe error: Get "https://10.128.0.13:8443/healthz": dial tcp 10.128.0.13:8443: connect: connection refused body:
(x3)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Unhealthy

Liveness probe failed: Get "https://10.128.0.13:8443/healthz": dial tcp 10.128.0.13:8443: connect: connection refused

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-dgjgz_openshift-machine-api_1ba27b7c-a93d-4d6e-a8f2-ec15903dd00c_0(e26e5f12dcbcb2d223b658f0890fa17b46ab3d6fe5a85d6a3bb1810c111f416b): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-dgjgz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e26e5f12dcbcb2d223b658f0890fa17b46ab3d6fe5a85d6a3bb1810c111f416b" Netns:"/var/run/netns/8a335742-3560-470e-9707-8b2d713ca525" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-dgjgz;K8S_POD_INFRA_CONTAINER_ID=e26e5f12dcbcb2d223b658f0890fa17b46ab3d6fe5a85d6a3bb1810c111f416b;K8S_POD_UID=1ba27b7c-a93d-4d6e-a8f2-ec15903dd00c" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-dgjgz] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-dgjgz/1ba27b7c-a93d-4d6e-a8f2-ec15903dd00c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-dgjgz in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-dgjgz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-dgjgz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-dgjgz_openshift-machine-api_1ba27b7c-a93d-4d6e-a8f2-ec15903dd00c_0(e26e5f12dcbcb2d223b658f0890fa17b46ab3d6fe5a85d6a3bb1810c111f416b): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-dgjgz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e26e5f12dcbcb2d223b658f0890fa17b46ab3d6fe5a85d6a3bb1810c111f416b" Netns:"/var/run/netns/8a335742-3560-470e-9707-8b2d713ca525" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-dgjgz;K8S_POD_INFRA_CONTAINER_ID=e26e5f12dcbcb2d223b658f0890fa17b46ab3d6fe5a85d6a3bb1810c111f416b;K8S_POD_UID=1ba27b7c-a93d-4d6e-a8f2-ec15903dd00c" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-dgjgz] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-dgjgz/1ba27b7c-a93d-4d6e-a8f2-ec15903dd00c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-dgjgz in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-dgjgz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-dgjgz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x2)

openshift-machine-api

multus

control-plane-machine-set-operator-6686554ddc-dgjgz

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-bczvd

Started

Started container kube-storage-version-migrator-operator

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953" already present on machine

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-bczvd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9" already present on machine

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine
(x2)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Started

Started container authentication-operator
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-7f65c457f5-bczvd

Created

Created container: kube-storage-version-migrator-operator

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-85z7m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine
(x2)

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Created

Created container: authentication-operator
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-85z7m

Created

Created container: kube-scheduler-operator-container
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5c74bfc494-85z7m

Started

Started container kube-scheduler-operator-container
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Started

Started container network-operator

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Created

Created container: network-operator

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

Unhealthy

Readiness probe failed: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

ProbeError

Liveness probe error: Get "https://10.128.0.25:8443/healthz": dial tcp 10.128.0.25:8443: connect: connection refused body:

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

ProbeError

Readiness probe error: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused body:

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

Unhealthy

Liveness probe failed: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

ProbeError

Liveness probe error: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused body:

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Unhealthy

Readiness probe failed: Get "https://10.128.0.25:8443/healthz": dial tcp 10.128.0.25:8443: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

ProbeError

Readiness probe error: Get "https://10.128.0.25:8443/healthz": dial tcp 10.128.0.25:8443: connect: connection refused body:

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Unhealthy

Liveness probe failed: Get "https://10.128.0.25:8443/healthz": dial tcp 10.128.0.25:8443: connect: connection refused

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "
(x2)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Started

Started container catalog-operator

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine
(x2)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Created

Created container: catalog-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Created

Created container: kube-controller-manager-operator
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Started

Started container cluster-olm-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Started

Started container kube-controller-manager-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine
(x2)

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

Created

Created container: olm-operator
(x2)

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

Started

Started container olm-operator

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Started

Started container approver

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1"

openshift-operator-lifecycle-manager

kubelet

olm-operator-d64cfc9db-qd6xh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6" already present on machine
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Created

Created container: service-ca-operator
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Started

Started container service-ca-operator

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Created

Created container: approver

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab" already present on machine
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Created

Created container: cluster-olm-operator

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine
(x2)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Started

Started container etcd-operator
(x2)

openshift-etcd-operator

kubelet

etcd-operator-5884b9cd56-lc94h

Created

Created container: etcd-operator
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Created

Created container: openshift-apiserver-operator
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-799b6db4d7-jtbd6

Started

Started container openshift-apiserver-operator

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-69b6fc6b88-cg9rz_c850cc16-ffc9-411e-a3ea-88a0563ef695 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) 
\"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Started

Started container control-plane-machine-set-operator

openshift-machine-api

control-plane-machine-set-operator-6686554ddc-dgjgz_4f60dc70-fd55-4442-8821-23b765f2fbd0

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-6686554ddc-dgjgz_4f60dc70-fd55-4442-8821-23b765f2fbd0 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0 I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0307 21:15:21.616257 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0307 21:15:21.616269 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0 F0307 21:16:05.637157 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1" in 2.192s (2.192s including waiting). Image size: 470680779 bytes.

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Created

Created container: control-plane-machine-set-operator

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) 
(len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods 
for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 
cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "
(x2)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Created

Created container: ingress-operator

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-559568b945 to 1

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-664cb58b85

SuccessfulCreate

Created pod: cluster-samples-operator-664cb58b85-fmzk7
(x2)

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Started

Started container ingress-operator

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-69576476f7 to 1

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-559568b945

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-559568b945-pmr9d

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-55d85b7b47

SuccessfulCreate

Created pod: cloud-credential-operator-55d85b7b47-7tb74

openshift-ingress-operator

kubelet

ingress-operator-677db989d6-tklw9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0" already present on machine

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-55d85b7b47 to 1

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-69576476f7

SuccessfulCreate

Created pod: cluster-autoscaler-operator-69576476f7-dqvvb

openshift-cluster-machine-approver

replicaset-controller

machine-approver-955fcfb87

SuccessfulCreate

Created pod: machine-approver-955fcfb87-cwdkv

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-955fcfb87 to 1

openshift-machine-api

replicaset-controller

machine-api-operator-84bf6db4f9

SuccessfulCreate

Created pod: machine-api-operator-84bf6db4f9-t8jw4

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_3e9a014b-38dc-477c-9a8d-0e7feb2cc9f2 became leader

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-84bf6db4f9 to 1

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-664cb58b85 to 1

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-6fbfc8dc8f to 1

openshift-insights

replicaset-controller

insights-operator-8f89dfddd

SuccessfulCreate

Created pod: insights-operator-8f89dfddd-rlx9x

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-8f89dfddd to 1

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-6fbfc8dc8f

SuccessfulCreate

Created pod: cluster-storage-operator-6fbfc8dc8f-v48jn

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-machine-config-operator

replicaset-controller

machine-config-operator-fdb5c78b5

SuccessfulCreate

Created pod: machine-config-operator-fdb5c78b5-rk7q8

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-fdb5c78b5 to 1

openshift-kube-apiserver

multus

installer-1-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Started

Started container installer

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce"

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Created

Created container: installer

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_c096f289-238b-4968-9226-34dd049d459a became leader

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5"

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-machine-config-operator

multus

machine-config-operator-fdb5c78b5-rk7q8

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-cluster-storage-operator

multus

cluster-storage-operator-6fbfc8dc8f-v48jn

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3"

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7"

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cloud-credential-operator

multus

cloud-credential-operator-55d85b7b47-7tb74

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes

openshift-machine-api

multus

machine-api-operator-84bf6db4f9-t8jw4

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821"

openshift-marketplace

kubelet

community-operators-rw59s

Started

Started container extract-utilities

openshift-marketplace

kubelet

community-operators-rw59s

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-rw59s

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

multus

community-operators-rw59s

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes

openshift-machine-api

multus

cluster-autoscaler-operator-69576476f7-dqvvb

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf"

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cluster-samples-operator

multus

cluster-samples-operator-664cb58b85-fmzk7

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-insights

multus

insights-operator-8f89dfddd-rlx9x

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-marketplace

multus

redhat-marketplace-z2cc9

AddedInterface

Add eth0 [10.128.0.66/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Created

Created container: kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-marketplace

multus

certified-operators-vxpb5

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-vxpb5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

certified-operators-vxpb5

Created

Created container: extract-utilities

openshift-marketplace

kubelet

certified-operators-vxpb5

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-vxpb5

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-fdltd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

multus

redhat-operators-fdltd

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d"

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Started

Started container extract-utilities

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

Created

Created container: machine-config-operator

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8"

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-rw59s

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-fdltd

Created

Created container: extract-utilities

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

Started

Started container machine-config-operator

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-955fcfb87 to 0 from 1

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

replicaset-controller

machine-approver-955fcfb87

SuccessfulDelete

Deleted pod: machine-approver-955fcfb87-cwdkv

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-marketplace

kubelet

redhat-operators-fdltd

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-fdltd

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-kp74q

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-559568b945

SuccessfulDelete

Deleted pod: cluster-cloud-controller-manager-operator-559568b945-pmr9d

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled down replica set cluster-cloud-controller-manager-operator-559568b945 to 0 from 1

openshift-operator-lifecycle-manager

package-server-manager-854648ff6d-kr9ft_f2a5daef-b876-48bd-8d1b-bbb1eb726dc6

packageserver-controller-lock

LeaderElection

package-server-manager-854648ff6d-kr9ft_f2a5daef-b876-48bd-8d1b-bbb1eb726dc6 became leader

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-f5bf97fcc to 1

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

requirements not yet checked

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-f5bf97fcc

SuccessfulCreate

Created pod: packageserver-f5bf97fcc-w82vx

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3" in 12.57s (12.57s including waiting). Image size: 456374430 bytes.

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d" in 12.617s (12.617s including waiting). Image size: 467234714 bytes.

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5" in 12.198s (12.198s including waiting). Image size: 513581866 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf" in 12.223s (12.223s including waiting). Image size: 455416776 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) 
(len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: 
(string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] 
Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" in 16.953s (16.953s including waiting). Image size: 557426734 bytes.

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Created

Created container: extract-content

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Created

Created container: machine-config-daemon

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Created

Created container: machine-api-operator

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7" in 23.834s (23.834s including waiting). Image size: 862197440 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Created

Created container: cluster-cloud-controller-manager

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Started

Started container machine-config-daemon

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Started

Started container machine-api-operator

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821" in 23.597s (23.597s including waiting). Image size: 504658657 bytes.

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

Created

Created container: insights-operator

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

Started

Started container insights-operator

openshift-marketplace

kubelet

redhat-operators-fdltd

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-fdltd

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-fdltd

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 22.793s (22.793s including waiting). Image size: 1733328350 bytes.

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 22.785s (22.785s including waiting). Image size: 1229556414 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

Created

Created container: cluster-samples-operator

openshift-operator-lifecycle-manager

kubelet

packageserver-f5bf97fcc-w82vx

Created

Created container: packageserver

openshift-operator-lifecycle-manager

kubelet

packageserver-f5bf97fcc-w82vx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

Started

Started container cluster-storage-operator

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

Created

Created container: cluster-storage-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Created

Created container: cluster-autoscaler-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Started

Started container cluster-autoscaler-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

Started

Started container cluster-samples-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

Created

Created container: cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

Started

Started container cluster-samples-operator-watch

openshift-marketplace

kubelet

certified-operators-vxpb5

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 23.816s (23.816s including waiting). Image size: 1272201949 bytes.

openshift-machine-api

cluster-autoscaler-operator-69576476f7-dqvvb_9faadee9-6910-4095-9d40-0ea6f17f1f9c

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-69576476f7-dqvvb_9faadee9-6910-4095-9d40-0ea6f17f1f9c became leader

openshift-marketplace

kubelet

certified-operators-vxpb5

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-rw59s

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-rw59s

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-rw59s

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 22.78s (22.78s including waiting). Image size: 1220167376 bytes.

openshift-marketplace

kubelet

certified-operators-vxpb5

Started

Started container extract-content

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Started

Started container machine-approver-controller

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

Started

Started container cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

Created

Created container: cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8" in 23.148s (23.148s including waiting). Image size: 880378279 bytes.

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Created

Created container: machine-approver-controller

openshift-operator-lifecycle-manager

multus

packageserver-f5bf97fcc-w82vx

AddedInterface

Add eth0 [10.128.0.67/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

packageserver-f5bf97fcc-w82vx

Started

Started container packageserver

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Started

Started container kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Killing

Stopping container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-955fcfb87-cwdkv

Killing

Stopping container machine-approver-controller

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Started

Started container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Created

Created container: config-sync-controllers (x2)

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorVersionChanged

clusteroperator/storage version "operator" changed from "" to "4.18.34"

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-cloud-controller-manager-operator

master-0_3f98235e-9bda-436d-9eea-784778b5718d

cluster-cloud-controller-manager-leader

LeaderElection

master-0_3f98235e-9bda-436d-9eea-784778b5718d became leader

openshift-cloud-controller-manager-operator

master-0_e12acfa9-d490-43fb-870e-cd295b4d4b4b

cluster-cloud-config-sync-leader

LeaderElection

master-0_e12acfa9-d490-43fb-870e-cd295b4d4b4b became leader

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

default

machineapioperator

machine-api

Status upgrade

Progressing towards operator: 4.18.34

openshift-insights

openshift-insights-operator

insights-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (Enabled and Disabled gate lists identical to the first FeatureGatesInitialized message above)

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"), Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-6fbfc8dc8f-v48jn_46cf5517-3e09-4ea0-951b-65e849d833ea became leader

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well")

openshift-operator-lifecycle-manager

kubelet

packageserver-f5bf97fcc-w82vx

Unhealthy

Readiness probe failed: Get "https://10.128.0.67:5443/healthz": dial tcp 10.128.0.67:5443: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

packageserver-f5bf97fcc-w82vx

ProbeError

Readiness probe error: Get "https://10.128.0.67:5443/healthz": dial tcp 10.128.0.67:5443: connect: connection refused body:
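
Unhealthy and ProbeError rows like the pair above report the same underlying probe failure under two different reasons. A small sketch (the row tuples mirror the Namespace/Component/RelatedObject/Reason/Message columns of this table; messages are abridged and the counting approach is ours) that tallies probe-failure events per pod:

```python
from collections import Counter

# Sample rows shaped like the columns of this events table (messages abridged).
events = [
    ("openshift-operator-lifecycle-manager", "kubelet",
     "packageserver-f5bf97fcc-w82vx", "Unhealthy",
     "Readiness probe failed: connect: connection refused"),
    ("openshift-operator-lifecycle-manager", "kubelet",
     "packageserver-f5bf97fcc-w82vx", "ProbeError",
     "Readiness probe error: connect: connection refused"),
]

# Count probe-related events per pod; a pod that shows up repeatedly here is
# the first place to look for a slow-starting or crash-looping container.
probe_failures = Counter(
    obj for _, _, obj, reason, _ in events
    if reason in ("Unhealthy", "ProbeError")
)
```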

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-754bdc9f9d to 1

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Killing

Stopping container kube-rbac-proxy

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 449ms (449ms including waiting). Image size: 918278686 bytes.
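
Kubelet "Pulled" messages like the one above carry pull duration and image size in a fixed textual format. A hedged sketch (regex and helper are ours, keyed to the exact message shape shown here; the sample image reference is abridged and hypothetical) that extracts those fields:

```python
import re

# Matches: Successfully pulled image "<ref>" in <d> (<d> including waiting). Image size: <n> bytes.
PULLED_RE = re.compile(
    r'Successfully pulled image "(?P<image>[^"]+)" in (?P<duration>[\d.]+m?s) '
    r'\((?P<total>[\d.]+m?s) including waiting\)\. Image size: (?P<size>\d+) bytes\.'
)

def parse_pulled(message):
    """Return image/duration/total/size fields of a Pulled message, or None."""
    m = PULLED_RE.search(message)
    return m.groupdict() if m else None

info = parse_pulled(
    'Successfully pulled image "quay.io/example@sha256:ff40e33e" in 449ms '
    '(449ms including waiting). Image size: 918278686 bytes.'
)
```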

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-z2cc9

Started

Started container registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Killing

Stopping container config-sync-controllers

openshift-cluster-machine-approver

replicaset-controller

machine-approver-754bdc9f9d

SuccessfulCreate

Created pod: machine-approver-754bdc9f9d-bbz7l

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-559568b945-pmr9d

Killing

Stopping container cluster-cloud-controller-manager

openshift-marketplace

kubelet

community-operators-rw59s

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 428ms (428ms including waiting). Image size: 918278686 bytes.

openshift-marketplace

kubelet

community-operators-rw59s

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

redhat-operators-fdltd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

redhat-operators-fdltd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 387ms (387ms including waiting). Image size: 918278686 bytes.

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-marketplace

kubelet

redhat-operators-fdltd

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-fdltd

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-vxpb5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" in 458ms (458ms including waiting). Image size: 918278686 bytes.

openshift-marketplace

kubelet

certified-operators-vxpb5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec"

openshift-marketplace

kubelet

certified-operators-vxpb5

Created

Created container: registry-server

openshift-marketplace

kubelet

certified-operators-vxpb5

Started

Started container registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing

openshift-marketplace

kubelet

community-operators-rw59s

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-rw59s

Started

Started container registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing

openshift-cluster-machine-approver

master-0_105f3d7d-6a20-42c6-8820-32b2c869b844

cluster-machine-approver-leader

LeaderElection

master-0_105f3d7d-6a20-42c6-8820-32b2c869b844 became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Started

Started container machine-approver-controller

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-7c8df9b496

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Created

Created container: machine-approver-controller

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-7c8df9b496 to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Started

Started container cluster-cloud-controller-manager

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (Enabled and Disabled gate lists identical to the first FeatureGatesInitialized message above)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Started

Started container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing

openshift-machine-config-operator

replicaset-controller

machine-config-controller-ff46b7bdf

SuccessfulCreate

Created pod: machine-config-controller-ff46b7bdf-55p6v

openshift-machine-config-operator

deployment-controller

machine-config-controller

ScalingReplicaSet

Scaled up replica set machine-config-controller-ff46b7bdf to 1

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (Enabled and Disabled gate lists identical to the first FeatureGatesInitialized message above)

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Created

Created container: machine-config-controller

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-machine-config-operator

multus

machine-config-controller-ff46b7bdf-55p6v

AddedInterface

Add eth0 [10.128.0.68/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Started

Started container machine-config-controller

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

FailedMount

MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition

openshift-marketplace

kubelet

redhat-operators-fdltd

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-network-diagnostics

kubelet

network-check-source-7c67b67d47-88mpr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine

openshift-network-diagnostics

kubelet

network-check-source-7c67b67d47-88mpr

Created

Created container: check-endpoints

openshift-network-diagnostics

multus

network-check-source-7c67b67d47-88mpr

AddedInterface

Add eth0 [10.128.0.70/23] from ovn-kubernetes

openshift-network-diagnostics

kubelet

network-check-source-7c67b67d47-88mpr

Started

Started container check-endpoints

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

FailedMount

MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition

openshift-ingress

kubelet

router-default-79f8cd6fdd-858hg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032"

openshift-ingress

kubelet

router-default-79f8cd6fdd-858hg

Created

Created container: router

openshift-ingress

kubelet

router-default-79f8cd6fdd-858hg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032" in 2.371s (2.371s including waiting). Image size: 487151732 bytes.

openshift-monitoring

multus

prometheus-operator-admission-webhook-8464df8497-lxzml

AddedInterface

Add eth0 [10.128.0.69/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc516f6eb3028f5169f1712ac1878d4b591174fd7c363f4ee5aa63162aa01b0e"

openshift-ingress

kubelet

router-default-79f8cd6fdd-858hg

Started

Started container router

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-xskwx

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

Created

Created container: prometheus-operator-admission-webhook

openshift-machine-config-operator

kubelet

machine-config-server-xskwx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

Started

Started container prometheus-operator-admission-webhook

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de successfully generated (release version: 4.18.34, controller version: d4eb710b17481f468c73d93c876a385253a863e0)

openshift-machine-config-operator

kubelet

machine-config-server-xskwx

Created

Created container: machine-config-server

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc516f6eb3028f5169f1712ac1878d4b591174fd7c363f4ee5aa63162aa01b0e" in 1.367s (1.367s including waiting). Image size: 444572615 bytes.

openshift-machine-config-operator

kubelet

machine-config-server-xskwx

Started

Started container machine-config-server

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-433720038e502b0ee627c5004ac274fa successfully generated (release version: 4.18.34, controller version: d4eb710b17481f468c73d93c876a385253a863e0)
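
RenderedConfigGenerated messages like the two above name the rendered MachineConfig and record which release and controller versions produced it. A sketch (the regex is ours, written against the exact message format above) pulling those fields out:

```python
import re

# Matches: <rendered-name> successfully generated (release version: <v>, controller version: <hash>)
RENDERED_RE = re.compile(
    r"(?P<name>rendered-\S+) successfully generated "
    r"\(release version: (?P<release>\S+), controller version: (?P<controller>\w+)\)"
)

m = RENDERED_RE.search(
    "rendered-worker-433720038e502b0ee627c5004ac274fa successfully generated "
    "(release version: 4.18.34, controller version: "
    "d4eb710b17481f468c73d93c876a385253a863e0)"
)
```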

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing
(x3)

openshift-ingress

kubelet

router-default-79f8cd6fdd-858hg

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

static-pod-installer

installer-1-retry-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 1

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.34"}] to [{"raw-internal" "4.18.34"} {"kube-apiserver" "1.31.14"} {"operator" "4.18.34"}]

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14"

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.34"
(x4)

openshift-ingress

kubelet

router-default-79f8cd6fdd-858hg

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true
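
The kube-apiserver termination events interleaved above (ShutdownInitiated through TerminationGracefulTerminationFinished) appear in a fixed lifecycle order in this dump. A sketch (the ordering list uses reason names taken verbatim from the rows above; the checker itself is ours) that verifies a stream of Reason values respects that order:

```python
# Shutdown-related reasons, in the order they occur in this events dump.
SHUTDOWN_ORDER = [
    "ShutdownInitiated",
    "AfterShutdownDelayDuration",
    "TerminationPreShutdownHooksFinished",
    "InFlightRequestsDrained",
    "HTTPServerStoppedListening",
    "TerminationGracefulTerminationFinished",
]

def in_lifecycle_order(reasons):
    """True when the shutdown-related reasons appear in lifecycle order.

    Reasons outside SHUTDOWN_ORDER (e.g. unrelated interleaved events) are
    ignored rather than treated as violations.
    """
    stages = [SHUTDOWN_ORDER.index(r) for r in reasons if r in SHUTDOWN_ORDER]
    return stages == sorted(stages)

observed = ["ShutdownInitiated", "StaticPodInstallerCompleted",
            "AfterShutdownDelayDuration", "TerminationPreShutdownHooksFinished",
            "InFlightRequestsDrained", "HTTPServerStoppedListening",
            "TerminationGracefulTerminationFinished"]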

default

kubelet

master-0

Starting

Starting kubelet.

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineOSBuilderFailed

Unable to apply 4.18.34: failed to apply machine os builder manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings/machine-os-builder-events": dial tcp 172.30.0.1:443: connect: connection refused
(x3)

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory
(x3)

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID
(x3)

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-664cb58b85-fmzk7

FailedMount

MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

FailedMount

MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-f5bf97fcc-w82vx

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

FailedMount

MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-55d85b7b47-7tb74

FailedMount

MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-xskwx

FailedMount

MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-xskwx

FailedMount

MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-f5bf97fcc-w82vx

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

FailedMount

MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-86d86fcf49-hgbkg

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

FailedMount

MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

FailedMount

MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-8464df8497-lxzml

FailedMount

MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

FailedMount

MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-fdb5c78b5-rk7q8

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
(x9)
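The long run of FailedMount events above is easier to reason about in aggregate. Since each record in this dump is a flattened five-field tuple (namespace, component, related object, reason, message), optionally followed by an `(xN)` repeat count, it can be re-parsed into structured records. A minimal sketch, assuming each field stays on one line (the sample is taken from two of the events above):

```python
import re
from collections import Counter

SAMPLE = """\
openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-8f89dfddd-rlx9x

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition
(x3)
"""

def parse_events(text):
    """Chunk a flattened event dump into (namespace, component, object,
    reason, message, count) tuples. Assumes one field per non-blank line
    and an optional trailing '(xN)' aggregation count per record."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    records, i = [], 0
    while i + 4 < len(lines):
        ns, comp, obj, reason, msg = lines[i:i + 5]
        i += 5
        count = 1
        if i < len(lines):
            m = re.fullmatch(r"\(x(\d+)\)", lines[i])
            if m:
                count = int(m.group(1))
                i += 1
        records.append((ns, comp, obj, reason, msg, count))
    return records

# Weight each reason by its aggregation count to see what dominates the window.
totals = Counter()
for ns, comp, obj, reason, msg, count in parse_events(SAMPLE):
    totals[reason] += count
```

Run against the full dump, this kind of weighting shows the FailedMount burst for what it is: many volumes on one node hitting the same kubelet cache-sync timeout at once.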

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Unable to apply 4.18.34: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_4ea6295f-263c-4f1c-a28e-e1cd61387183 became leader

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-7hzkm

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-canary namespace

openshift-monitoring

replicaset-controller

prometheus-operator-5ff8674d55

SuccessfulCreate

Created pod: prometheus-operator-5ff8674d55-nvm8t

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-5ff8674d55 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-operator namespace

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-user-settings namespace

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console-operator

deployment-controller

console-operator

ScalingReplicaSet

Scaled up replica set console-operator-6c7fb6b958 to 1

openshift-console-operator

replicaset-controller

console-operator-6c7fb6b958

SuccessfulCreate

Created pod: console-operator-6c7fb6b958-2grlf

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-network-node-identity

master-0_cbaab617-d190-4b6a-8cbc-e19ec9a698f2

ovnkube-identity

LeaderElection

master-0_cbaab617-d190-4b6a-8cbc-e19ec9a698f2 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RequiredPoolsFailed

Unable to apply 4.18.34: error during syncRequiredMachineConfigPools: context deadline exceeded

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing
(x12)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.34} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c}]
(x5)

openshift-ingress-canary

kubelet

ingress-canary-7hzkm

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found
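This FailedMount is a different failure mode from the earlier `failed to sync ... cache: timed out waiting for the condition` burst: there the kubelet's informer cache simply had not warmed up yet after restart, while here the referenced object (`canary-serving-cert`) does not exist yet at all. A rough triage heuristic over the message text (illustrative only, not an official classification):

```python
import re

def classify_failed_mount(message):
    """Heuristic triage of the two FailedMount flavours in this window."""
    if "failed to sync" in message and "timed out waiting for the condition" in message:
        # kubelet restarted and its secret/configmap informer cache is cold;
        # these typically clear on their own once the cache syncs
        return "transient-cache-sync"
    if re.search(r'(secret|configmap) "[^"]+" not found', message):
        # the referenced object has not been created yet (or is misnamed)
        return "missing-object"
    return "other"
```

Applied to this dump, nearly everything before the kubelet restart settles into the first bucket; the canary event above lands in the second, which resolves only once the serving-cert secret is actually created.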

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor
(x5)

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found
(x5)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) 
(len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: "

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 
2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-ingress-canary

multus

ingress-canary-7hzkm

AddedInterface

Add eth0 [10.128.0.71/23] from ovn-kubernetes

openshift-ingress-canary

kubelet

ingress-canary-7hzkm

Started

Started container serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-7hzkm

Created

Created container: serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-7hzkm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0" already present on machine

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bedf16a5f527126e934c37d2f24886de4a54c9bd9d45b18821d02eefd8b5f9e"

openshift-monitoring

multus

prometheus-operator-5ff8674d55-nvm8t

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-monitoring

multus

prometheus-operator-5ff8674d55-nvm8t

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bedf16a5f527126e934c37d2f24886de4a54c9bd9d45b18821d02eefd8b5f9e"

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bedf16a5f527126e934c37d2f24886de4a54c9bd9d45b18821d02eefd8b5f9e" in 2.347s (2.347s including waiting). Image size: 461569069 bytes.

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Created

Created container: prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bedf16a5f527126e934c37d2f24886de4a54c9bd9d45b18821d02eefd8b5f9e" in 2.347s (2.347s including waiting). Image size: 461569069 bytes.

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Created

Created container: prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-5ff8674d55-nvm8t

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-c8pdj

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

openshift-state-metrics-74cc79fd76

SuccessfulCreate

Created pod: openshift-state-metrics-74cc79fd76-84z7r

openshift-monitoring

kubelet

node-exporter-c8pdj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3888758fa24689d4e63dfb78ed97a852c687295adcabdabf8cdc4a2beaa42398"

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-74cc79fd76 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-68b88f8cb5 to 1

openshift-monitoring

replicaset-controller

kube-state-metrics-68b88f8cb5

SuccessfulCreate

Created pod: kube-state-metrics-68b88f8cb5-5cj66

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-68b88f8cb5 to 1

openshift-monitoring

replicaset-controller

kube-state-metrics-68b88f8cb5

SuccessfulCreate

Created pod: kube-state-metrics-68b88f8cb5-5cj66

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-c8pdj

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

replicaset-controller

openshift-state-metrics-74cc79fd76

SuccessfulCreate

Created pod: openshift-state-metrics-74cc79fd76-84z7r

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-74cc79fd76 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

kubelet

node-exporter-c8pdj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3888758fa24689d4e63dfb78ed97a852c687295adcabdabf8cdc4a2beaa42398"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

multus

openshift-state-metrics-74cc79fd76-84z7r

AddedInterface

Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bececf32872455775075a3d35100302396ca58ae29827b24d7df086d8ac14432"

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Started

Started container kube-rbac-proxy-main

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

multus

openshift-state-metrics-74cc79fd76-84z7r

AddedInterface

Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bececf32872455775075a3d35100302396ca58ae29827b24d7df086d8ac14432"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

multus

kube-state-metrics-68b88f8cb5-5cj66

AddedInterface

Add eth0 [10.128.0.75/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

multus

kube-state-metrics-68b88f8cb5-5cj66

AddedInterface

Add eth0 [10.128.0.75/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Started

Started container kube-rbac-proxy-self

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-c8pdj

Created

Created container: init-textfile

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-9995cd46f to 1

openshift-monitoring

kubelet

node-exporter-c8pdj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3888758fa24689d4e63dfb78ed97a852c687295adcabdabf8cdc4a2beaa42398" in 1.714s (1.714s including waiting). Image size: 417687610 bytes.

openshift-monitoring

replicaset-controller

thanos-querier-9995cd46f

SuccessfulCreate

Created pod: thanos-querier-9995cd46f-q546g

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-c8pdj

Started

Started container init-textfile

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-k9ktl35kg68d -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

node-exporter-c8pdj

Created

Created container: node-exporter

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

node-exporter-c8pdj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3888758fa24689d4e63dfb78ed97a852c687295adcabdabf8cdc4a2beaa42398" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bececf32872455775075a3d35100302396ca58ae29827b24d7df086d8ac14432" in 2.299s (2.299s including waiting). Image size: 431974231 bytes.

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Created

Created container: openshift-state-metrics

openshift-monitoring

kubelet

openshift-state-metrics-74cc79fd76-84z7r

Started

Started container openshift-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Created

Created container: kube-state-metrics

openshift-monitoring

kubelet

node-exporter-c8pdj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

node-exporter-c8pdj

Started

Started container node-exporter

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7" in 2.529s (2.529s including waiting). Image size: 440559528 bytes.

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

multus

thanos-querier-9995cd46f-q546g

AddedInterface

Add eth0 [10.128.0.77/23] from ovn-kubernetes

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88"

openshift-monitoring

kubelet

node-exporter-c8pdj

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-c8pdj

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-68b88f8cb5-5cj66

Started

Started container kube-rbac-proxy-main

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-6fdfc4cfb9 to 1

openshift-monitoring

replicaset-controller

metrics-server-6fdfc4cfb9

SuccessfulCreate

Created pod: metrics-server-6fdfc4cfb9-d2n6q

openshift-monitoring

replicaset-controller

monitoring-plugin-6bc88968b6

SuccessfulCreate

Created pod: monitoring-plugin-6bc88968b6-frbh2

openshift-monitoring

deployment-controller

monitoring-plugin

ScalingReplicaSet

Scaled up replica set monitoring-plugin-6bc88968b6 to 1

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Started

Started container thanos-query

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-2ibssd7q5gl39 -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" in 2.437s (2.437s including waiting). Image size: 502712961 bytes.

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Created

Created container: kube-rbac-proxy

openshift-monitoring

multus

metrics-server-6fdfc4cfb9-d2n6q

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-6fdfc4cfb9-d2n6q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f"

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63"

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Started

Started container kube-rbac-proxy

openshift-monitoring

multus

monitoring-plugin-6bc88968b6-frbh2

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes

openshift-monitoring

kubelet

monitoring-plugin-6bc88968b6-frbh2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b42a9b781e6d974a9f6f89286c95c16e18e78d4682420a29ae7c5aa35012191"

openshift-monitoring

kubelet

monitoring-plugin-6bc88968b6-frbh2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b42a9b781e6d974a9f6f89286c95c16e18e78d4682420a29ae7c5aa35012191" in 1.877s (1.877s including waiting). Image size: 447810376 bytes.

openshift-monitoring

kubelet

metrics-server-6fdfc4cfb9-d2n6q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f" in 1.807s (1.807s including waiting). Image size: 471430788 bytes.

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63" in 1.836s (1.836s including waiting). Image size: 413103557 bytes.

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

monitoring-plugin-6bc88968b6-frbh2

Started

Started container monitoring-plugin

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Started

Started container kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

metrics-server-6fdfc4cfb9-d2n6q

Created

Created container: metrics-server

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

metrics-server-6fdfc4cfb9-d2n6q

Started

Started container metrics-server

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

monitoring-plugin-6bc88968b6-frbh2

Created

Created container: monitoring-plugin

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Started

Started container kube-rbac-proxy-rules

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

thanos-querier-9995cd46f-q546g

Created

Created container: kube-rbac-proxy-rules

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:21.526893 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616174 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:21.616257 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.616269 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:21.632282 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:15:51.632659 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:05.637157 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 1"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1")

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Done

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.34} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c}]

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

APIServiceCreated

Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de to Done

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
(x9)

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
(x9)

openshift-console-operator

kubelet

console-operator-6c7fb6b958-2grlf

FailedMount

MountVolume.SetUp failed for volume "trusted-ca" : configmap references non-existent config key: ca-bundle.crt

openshift-cloud-controller-manager-operator

master-0_83ab6155-b88e-42bc-8d40-7aec576450e0

cluster-cloud-config-sync-leader

LeaderElection

master-0_83ab6155-b88e-42bc-8d40-7aec576450e0 became leader
(x9)

openshift-monitoring

kubelet

alertmanager-main-0

FailedMount

MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

DaemonSetCreated

Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing

openshift-image-registry

kubelet

node-ca-tztzb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4fda3b54d00ce93f9646411aaa4d337f897e30a70da77288b7f3fdeb5a8b1a6"

openshift-image-registry

daemonset-controller

node-ca

SuccessfulCreate

Created pod: node-ca-tztzb

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine
(x9)

openshift-monitoring

kubelet

prometheus-k8s-0

FailedMount

MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt
(x3)

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Unhealthy

Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Killing

Container machine-config-daemon failed liveness probe, will be restarted

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Started

Started container machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

Created

Created container: machine-config-daemon
(x3)

openshift-machine-config-operator

kubelet

machine-config-daemon-kp74q

ProbeError

Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:

openshift-image-registry

kubelet

node-ca-tztzb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4fda3b54d00ce93f9646411aaa4d337f897e30a70da77288b7f3fdeb5a8b1a6" in 2.082s (2.082s including waiting). Image size: 481636484 bytes.

openshift-image-registry

kubelet

node-ca-tztzb

Created

Created container: node-ca

openshift-image-registry

kubelet

node-ca-tztzb

Started

Started container node-ca

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
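
The FeatureGatesInitialized message above embeds its Enabled and Disabled gate lists in a Go-struct dump. A minimal sketch (not part of any cluster tooling; the excerpt string and the `gate_names` helper are illustrative) of pulling those lists out of such a message:

```python
# Minimal sketch: split a trimmed excerpt of a FeatureGatesInitialized event
# message into its Enabled and Disabled gate-name lists. The excerpt below is
# shortened from the message above; the helper name is hypothetical.
import re

msg = ('Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "KMSv1"}, '
      'Disabled:[]v1.FeatureGateName{"GatewayAPI", "NodeSwap"}')

def gate_names(message: str, section: str) -> list[str]:
    # Grab the {...} body that follows e.g. 'Enabled:[]v1.FeatureGateName'.
    body = re.search(section + r':\[\]v1\.FeatureGateName\{([^}]*)\}',
                     message).group(1)
    return re.findall(r'"([^"]+)"', body)

print(gate_names(msg, "Enabled"))   # → ['AdminNetworkPolicy', 'KMSv1']
print(gate_names(msg, "Disabled"))  # → ['GatewayAPI', 'NodeSwap']
```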

openshift-cloud-controller-manager-operator

master-0_aea63fbd-b2e0-4db9-85c7-7111e2d8143c

cluster-cloud-controller-manager-leader

LeaderElection

master-0_aea63fbd-b2e0-4db9-85c7-7111e2d8143c became leader

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de to Done

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-e6c18aa2631b99bdf4aa94562cc4b1de

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-5c74bfc494-85z7m_70cb2bfd-dbc4-4984-9797-361a64b9b4ae became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

installer errors: installer: s: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0307 21:15:34.814869 1 cmd.go:413] Getting controller reference for node master-0 I0307 21:15:34.830122 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0307 21:15:34.830208 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0307 21:15:34.830225 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0307 21:15:34.834256 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0307 21:16:04.834941 1 cmd.go:524] Getting installer pods for node master-0 F0307 21:16:18.839084 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
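
The fatal `F0307 21:16:18` line above ends with the exact API request that timed out. A minimal sketch (the excerpt string is trimmed from the message above; nothing here is part of the installer binary) of recovering that URL from such a klog line:

```python
# Minimal sketch: extract the quoted API URL from the installer's fatal log
# line (trimmed excerpt of the event message above).
import re

line = ('F0307 21:16:18.839084 1 cmd.go:109] Get '
        '"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/'
        'pods?labelSelector=app%3Dinstaller": net/http: request canceled '
        '(Client.Timeout exceeded while awaiting headers)')

url = re.search(r'"(https://[^"]+)"', line).group(1)
print(url)
```

The `labelSelector=app%3Dinstaller` query shows the operator was listing its installer pods when the client-side timeout fired.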

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) 
<nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:34.814869 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:34.830122 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:34.830208 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:34.830225 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:34.834256 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:16:04.834941 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:18.839084 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-86d7cdfdfb-wb26b_2d8184a6-8566-4049-8fb6-404e74a90d05 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:35.413482 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:35.430847 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:35.430922 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:35.430934 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:35.510907 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0307 21:15:59.521235 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0307 21:16:19.515151 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0307 21:16:39.512316 1 cmd.go:470] Error getting installer 
pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0307 21:16:53.516873 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:16:53.517040 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

installer errors: installer: etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0307 21:15:35.413482 1 cmd.go:413] Getting controller reference for node master-0 I0307 21:15:35.430847 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0307 21:15:35.430922 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0307 21:15:35.430934 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0307 21:15:35.510907 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0307 21:15:59.521235 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0307 21:16:19.515151 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0307 21:16:39.512316 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0307 21:16:53.516873 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0307 21:16:53.517040 1 cmd.go:109] timed out waiting for 
the condition

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:35.413482 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:35.430847 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:35.430922 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:35.430934 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:35.510907 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0307 21:15:59.521235 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0307 21:16:19.515151 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0307 21:16:39.512316 1 cmd.go:470] Error getting installer pods on current node master-0: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0307 21:16:53.516873 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:16:53.517040 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_b030d203-cf82-44ab-99af-6767ebf468ab became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-retry-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-multus

kubelet

cni-sysctl-allowlist-ds-2hhhs

Started

Started container kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-2hhhs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" already present on machine

openshift-kube-controller-manager

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-2hhhs

openshift-multus

kubelet

cni-sysctl-allowlist-ds-2hhhs

Created

Created container: kube-multus-additional-cni-plugins

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

multus

installer-4-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-controller-manager

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Created

Created container: installer

openshift-monitoring

replicaset-controller

telemeter-client-69ccf66766

SuccessfulCreate

Created pod: telemeter-client-69ccf66766-q79sx

openshift-multus

kubelet

cni-sysctl-allowlist-ds-2hhhs

Killing

Stopping container kube-multus-additional-cni-plugins

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-69ccf66766 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing

openshift-monitoring

multus

telemeter-client-69ccf66766-q79sx

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Created

Created container: telemeter-client

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f"

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e" in 2.079s (2.079s including waiting). Image size: 480534195 bytes.

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Started

Started container telemeter-client

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7c6989d6c4-7w8wf_610b2a7d-1e75-42f1-aeb4-23638020d122 became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory"),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" in 1.347s (1.347s including waiting). Image size: 437909442 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Started

Started container reload

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Created

Created container: reload

openshift-monitoring

kubelet

telemeter-client-69ccf66766-q79sx

Started

Started container kube-rbac-proxy

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory"

openshift-multus

replicaset-controller

multus-admission-controller-56bbfd46b8

SuccessfulCreate

Created pod: multus-admission-controller-56bbfd46b8-6qcf8

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-56bbfd46b8 to 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9" already present on machine

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Created

Created container: multus-admission-controller

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-multus

multus

multus-admission-controller-56bbfd46b8-6qcf8

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Created

Created container: multus-admission-controller

openshift-multus

multus

multus-admission-controller-56bbfd46b8-6qcf8

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Killing

Stopping container multus-admission-controller

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-8d675b596 to 0 from 1

openshift-multus

kubelet

multus-admission-controller-8d675b596-mmqbs

Killing

Stopping container kube-rbac-proxy

openshift-multus

replicaset-controller

multus-admission-controller-8d675b596

SuccessfulDelete

Deleted pod: multus-admission-controller-8d675b596-mmqbs

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.43:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.43:8443/apis/user.openshift.io/v1: 401\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 2 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2"

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-6c8ccbd44d to 1

openshift-authentication

replicaset-controller

oauth-openshift-6c8ccbd44d

SuccessfulCreate

Created pod: oauth-openshift-6c8ccbd44d-m8w7j
(x2)

openshift-authentication

kubelet

oauth-openshift-6c8ccbd44d-m8w7j

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-session" : secret "v4-0-config-system-session" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 3 triggered by "required configmap/config has changed"

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.86/23] from ovn-kubernetes
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, ... // 3 identical entries }

openshift-kube-apiserver

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing
(x5)

openshift-authentication

kubelet

oauth-openshift-6c8ccbd44d-m8w7j

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-network-console

replicaset-controller

networking-console-plugin-5cbd49d755

SuccessfulCreate

Created pod: networking-console-plugin-5cbd49d755-2lmd2

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-2lmd2

FailedMount

MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found

openshift-network-console

deployment-controller

networking-console-plugin

ScalingReplicaSet

Scaled up replica set networking-console-plugin-5cbd49d755 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-console namespace

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-2lmd2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba"

openshift-network-console

multus

networking-console-plugin-5cbd49d755-2lmd2

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-authentication

replicaset-controller

oauth-openshift-67c6dd6955

SuccessfulCreate

Created pod: oauth-openshift-67c6dd6955-hbksv

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication

replicaset-controller

oauth-openshift-6c8ccbd44d

SuccessfulDelete

Deleted pod: oauth-openshift-6c8ccbd44d-m8w7j

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-6c8ccbd44d to 0 from 1

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-67c6dd6955 to 1 from 0

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-2lmd2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba" in 1.411s (1.411s including waiting). Image size: 446924112 bytes.

openshift-console-operator

multus

console-operator-6c7fb6b958-2grlf

AddedInterface

Add eth0 [10.128.0.73/23] from ovn-kubernetes

openshift-console-operator

kubelet

console-operator-6c7fb6b958-2grlf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb"

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-2lmd2

Created

Created container: networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-5cbd49d755-2lmd2

Started

Started container networking-console-plugin

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing
(x3)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-2hhhs

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
(x3)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-2hhhs

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-authentication

multus

oauth-openshift-67c6dd6955-hbksv

AddedInterface

Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-authentication

kubelet

oauth-openshift-67c6dd6955-hbksv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-console-operator

kubelet

console-operator-6c7fb6b958-2grlf

Started

Started container console-operator

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.34"}] to [{"raw-internal" "4.18.34"} {"kube-scheduler" "1.31.14"} {"operator" "4.18.34"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.34"

openshift-console-operator

kubelet

console-operator-6c7fb6b958-2grlf

Created

Created container: console-operator

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-console-operator

kubelet

console-operator-6c7fb6b958-2grlf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb" in 2.497s (2.497s including waiting). Image size: 512235767 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

static-pod-installer

installer-4-retry-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14"

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.34"}]

openshift-authentication

kubelet

oauth-openshift-67c6dd6955-hbksv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f" in 2.758s (2.758s including waiting). Image size: 481454434 bytes.

openshift-authentication

kubelet

oauth-openshift-67c6dd6955-hbksv

Created

Created container: oauth-openshift

openshift-authentication

kubelet

oauth-openshift-67c6dd6955-hbksv

Started

Started container oauth-openshift

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.34"

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-6c7fb6b958-2grlf_470da466-61a8-49c6-8643-4907d30fdd20 became leader

openshift-console

deployment-controller

downloads

ScalingReplicaSet

Scaled up replica set downloads-84f57b9877 to 1
(x2)

openshift-console

controllermanager

downloads

NoPods

No matching pods found

openshift-console

replicaset-controller

downloads-84f57b9877

SuccessfulCreate

Created pod: downloads-84f57b9877-dwqg9
(x2)

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready"

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/console -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_4c20ab45-2c9d-4fdb-ad99-1cbc63abacbc became leader

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-oauthclient-secret-controller-oauthclientsecretcontroller

console-operator

SecretCreated

Created Secret/console-oauth-config -n openshift-console because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/default-ingress-cert -n openshift-console because it was missing

openshift-kube-controller-manager

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.34"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.34"}] to [{"raw-internal" "4.18.34"} {"operator" "4.18.34"} {"kube-controller-manager" "1.31.14"}]

openshift-console

multus

downloads-84f57b9877-dwqg9

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-console

kubelet

downloads-84f57b9877-dwqg9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_4a8f3fdd-8576-433f-bd67-f204984f4802 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-8565d84698-98wdp_0fbe83ce-ea0d-4790-88c5-c8fdf32c3143 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e95c47e9d"...)}},    "controllers": []any{    ... // 8 identical elements    string("openshift.io/deploymentconfig"),    string("openshift.io/image-import"),    strings.Join({ +  "-",    "openshift.io/image-puller-rolebindings",    }, ""),    string("openshift.io/image-signature-import"),    string("openshift.io/image-trigger"),    ... // 2 identical elements    string("openshift.io/origin-namespace"),    string("openshift.io/serviceaccount"),    strings.Join({ +  "-",    "openshift.io/serviceaccount-pull-secrets",    }, ""),    string("openshift.io/templateinstance"),    string("openshift.io/templateinstancefinalizer"),    string("openshift.io/unidling"),    },    "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52d35a623b"...)}},    "featureGates": []any{string("BuildCSIVolumes=true")},    "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   }

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3."

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveConsoleURL

assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentCreated

Created Deployment.apps/console -n openshift-console because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-config -n openshift-console because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\n- \t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n \t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n \t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n \t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n \t},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "required configmap/config has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/config has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced")

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5."

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-5884b9cd56-lc94h_c7d33360-7c6e-4ad0-bf39-b41679bdeadd became leader

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapUpdated

Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "etcd" changed from "" to "4.18.34"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "All is well"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")

openshift-kube-apiserver

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 3 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.76/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963"

openshift-kube-apiserver

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-3-master-0

Started

Started container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963" in 1.687s (1.687s including waiting). Image size: 467539377 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy (x4)

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentUpdated

Updated Deployment.apps/downloads -n openshift-console because it changed

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.34, 0 replicas available"

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-77899cf6d-cgdkk_8eb92317-ad83-41f1-9a81-69b07dfb9264 became leader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_e71616e6-8a27-421a-91c9-519bcba9ca61 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-controller-manager

replicaset-controller

controller-manager-86d86fcf49

SuccessfulDelete

Deleted pod: controller-manager-86d86fcf49-hgbkg

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-6d8686f75f to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-controller-manager

replicaset-controller

controller-manager-68f988879c

SuccessfulCreate

Created pod: controller-manager-68f988879c-j2dj6

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

Killing

Stopping container route-controller-manager

openshift-controller-manager

kubelet

controller-manager-86d86fcf49-hgbkg

Killing

Stopping container controller-manager

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready")

openshift-route-controller-manager

replicaset-controller

route-controller-manager-cdf659ffc

SuccessfulDelete

Deleted pod: route-controller-manager-cdf659ffc-4969h

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-7ff74686db to 1 from 0

openshift-authentication

kubelet

oauth-openshift-67c6dd6955-hbksv

Killing

Stopping container oauth-openshift

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-67c6dd6955 to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-6d8686f75f

SuccessfulCreate

Created pod: route-controller-manager-6d8686f75f-9t2lk

openshift-authentication

replicaset-controller

oauth-openshift-7ff74686db

SuccessfulCreate

Created pod: oauth-openshift-7ff74686db-b9jm5

openshift-console

replicaset-controller

console-64d844fb5f

SuccessfulCreate

Created pod: console-64d844fb5f-9b28j

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-86d86fcf49 to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-68f988879c to 1 from 0

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-64d844fb5f to 1

openshift-authentication

replicaset-controller

oauth-openshift-67c6dd6955

SuccessfulDelete

Deleted pod: oauth-openshift-67c6dd6955-hbksv

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-cdf659ffc to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/config has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:15:34.814869 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:15:34.830122 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:15:34.830208 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:15:34.830225 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:15:34.834256 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:16:04.834941 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:16:18.839084 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 5"

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

ProbeError

Readiness probe error: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused body:

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-cdf659ffc-4969h

Unhealthy

Readiness probe failed: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

kubelet

controller-manager-86d86fcf49-hgbkg

ProbeError

Readiness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused body:

openshift-controller-manager

kubelet

controller-manager-86d86fcf49-hgbkg

Unhealthy

Readiness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-64488f9d78-cb227_fa73947e-92cf-41e3-9f6f-753fc5d0c8a9 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-799b6db4d7-jtbd6_da7ef089-4748-44b9-8dee-c97ad4a5e8f6 became leader

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-etcd

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.92/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0" in 18.059s (18.059s including waiting). Image size: 605698200 bytes.

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-console

kubelet

downloads-84f57b9877-dwqg9

Started

Started container download-server

openshift-console

kubelet

downloads-84f57b9877-dwqg9

Created

Created container: download-server

openshift-console

kubelet

downloads-84f57b9877-dwqg9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b" in 38.712s (38.712s including waiting). Image size: 2895821940 bytes.

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-kube-apiserver

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-route-controller-manager

kubelet

route-controller-manager-6d8686f75f-9t2lk

Started

Started container route-controller-manager

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.94/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-6d8686f75f-9t2lk

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-6d8686f75f-9t2lk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-6d8686f75f-9t2lk

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-4-master-0

Started

Started container installer

openshift-console

kubelet

console-64d844fb5f-9b28j

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8"

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-kube-apiserver

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-kube-scheduler

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-kube-apiserver

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

installer-5-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-5-master-0

Started

Started container installer

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-console

multus

console-64d844fb5f-9b28j

AddedInterface

Add eth0 [10.128.0.91/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-2-master-0

Started

Started container installer

openshift-etcd

kubelet

installer-2-master-0

Created

Created container: installer

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-6d8686f75f-9t2lk_80c89087-7282-4508-a50b-6aac9aea6a6f became leader

openshift-controller-manager

multus

controller-manager-68f988879c-j2dj6

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes
(x3)

openshift-console

kubelet

downloads-84f57b9877-dwqg9

Unhealthy

Readiness probe failed: Get "http://10.128.0.89:8080/": dial tcp 10.128.0.89:8080: connect: connection refused
(x3)

openshift-console

kubelet

downloads-84f57b9877-dwqg9

ProbeError

Readiness probe error: Get "http://10.128.0.89:8080/": dial tcp 10.128.0.89:8080: connect: connection refused body:

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no openshift controller manager daemon pods available on any node.")

openshift-console

kubelet

console-64d844fb5f-9b28j

Started

Started container console

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-68f988879c-j2dj6 became leader

openshift-console

kubelet

console-64d844fb5f-9b28j

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" in 5.661s (5.661s including waiting). Image size: 633876767 bytes.

openshift-console

kubelet

console-64d844fb5f-9b28j

Created

Created container: console
(x3)

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-698d9d45c9 to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-7ff74686db to 0 from 1

openshift-authentication

replicaset-controller

oauth-openshift-7ff74686db

SuccessfulDelete

Deleted pod: oauth-openshift-7ff74686db-b9jm5

openshift-authentication

replicaset-controller

oauth-openshift-698d9d45c9

SuccessfulCreate

Created pod: oauth-openshift-698d9d45c9-5wh7z

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-7f65c457f5-bczvd_aec304aa-ff73-42aa-bbf5-716e0d5256ec became leader

openshift-authentication

multus

oauth-openshift-698d9d45c9-5wh7z

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-authentication

kubelet

oauth-openshift-698d9d45c9-5wh7z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f" already present on machine

openshift-authentication

kubelet

oauth-openshift-698d9d45c9-5wh7z

Started

Started container oauth-openshift

openshift-authentication

kubelet

oauth-openshift-698d9d45c9-5wh7z

Created

Created container: oauth-openshift

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.75.17:443/healthz\": dial tcp 172.30.75.17:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.34_openshift"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.34"} {"oauth-apiserver" "4.18.34"}] to [{"operator" "4.18.34"} {"oauth-apiserver" "4.18.34"} {"oauth-openshift" "4.18.34_openshift"}]

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Started

Started container approver

openshift-network-node-identity

kubelet

network-node-identity-kpsm4

Created

Created container: approver

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine
(x10)

openshift-console

kubelet

console-64d844fb5f-9b28j

Unhealthy

Startup probe failed: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914" already present on machine

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Started

Started container marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-64bf9778cb-q7hrg

Created

Created container: marketplace-operator

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Started

Started container manager

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Started

Started container manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc" already present on machine

openshift-operator-controller

kubelet

operator-controller-controller-manager-6598bfb6c4-mlxbw

Created

Created container: manager

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Started

Started container manager

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Created

Created container: manager

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783" already present on machine

openshift-catalogd

kubelet

catalogd-controller-manager-7f8b8b6f4c-mc2rc

Created

Created container: manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-7c8df9b496-wp42j

Started

Started container config-sync-controllers

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-68f988879c-j2dj6

Started

Started container controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-68f988879c-j2dj6

Created

Created container: controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-68f988879c-j2dj6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5" already present on machine
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609" already present on machine
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

BackOff

Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-7577d6f48-kzjmp_openshift-cluster-storage-operator(7fa7b789-9201-493e-a96d-484a2622301a)

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Started

Started container machine-approver-controller

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Started

Started container ovnkube-cluster-manager

openshift-cluster-machine-approver

kubelet

machine-approver-754bdc9f9d-bbz7l

Created

Created container: machine-approver-controller

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-66b55d57d-mc46k

Created

Created container: ovnkube-cluster-manager

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

EtcdEndpointsErrorUpdatingStatus

Timeout: request did not complete within requested timeout - context deadline exceeded

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1" already present on machine

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1" already present on machine

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Started

Started container control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Created

Created container: control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Started

Started container control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-6686554ddc-dgjgz

Created

Created container: control-plane-machine-set-operator

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Started

Started container package-server-manager

openshift-operator-lifecycle-manager

kubelet

package-server-manager-854648ff6d-kr9ft

Created

Created container: package-server-manager
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

Created

Created container: snapshot-controller
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1" already present on machine
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-7577d6f48-kzjmp

Started

Started container snapshot-controller
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:49764->127.0.0.1:10357: read: connection reset by peer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container cluster-policy-controller failed startup probe, will be restarted

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:49764->127.0.0.1:10357: read: connection reset by peer body:

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretUpdated

Updated Secret/v4-0-config-system-session -n openshift-authentication because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

installer errors: installer: out exceeded while awaiting headers) I0307 21:24:16.787408 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/service-ca-4: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps service-ca-4) I0307 21:24:31.052589 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/service-ca-4: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/service-ca-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0307 21:24:45.054031 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-4-master-0.189aac2353c0a550.b6f5515f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-4-master-0,UID:72b4d517-f9c1-4fb2-9217-bd02b6838b07,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 4: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/service-ca-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:31.05267848 +0000 UTC m=+87.441074066,LastTimestamp:2026-03-07 21:24:31.05267848 +0000 UTC m=+87.441074066,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0307 21:24:45.054259 1 cmd.go:109] failed to copy: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/service-ca-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

installer errors: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key" ... I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 5 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 5 triggered by "required secret/service-account-private-key has changed"

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5cdb4c5598-nmwjr_openshift-machine-api(a61a736a-66e5-4ca1-a8a7-088cf73cfcce)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5cdb4c5598-nmwjr_openshift-machine-api(a61a736a-66e5-4ca1-a8a7-088cf73cfcce)
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d" already present on machine
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Created

Created container: cluster-baremetal-operator
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d" already present on machine
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Created

Created container: cluster-baremetal-operator
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Started

Started container cluster-baremetal-operator
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5cdb4c5598-nmwjr

Started

Started container cluster-baremetal-operator
(x6)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-etcd-operator

openshift-cluster-etcd-operator-missingstaticpodcontroller

etcd-operator

MissingStaticPod

static pod lifecycle failure - static pod: "etcd" in namespace: "openshift-etcd" for revision: 2 on node: "master-0" didn't show up, waited: 3m30s
(x4)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

Failed to create installer pod for revision 5 count 1 on node "master-0": Internal error occurred: admission plugin "LimitRanger" failed to complete mutation in 13s
(x4)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreateFailed

Failed to create Pod/installer-5-master-0 -n openshift-kube-controller-manager: Internal error occurred: admission plugin "LimitRanger" failed to complete mutation in 13s
(x4)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreateFailed

Failed to create Pod/installer-5-retry-1-master-0 -n openshift-kube-scheduler: Internal error occurred: admission plugin "LimitRanger" failed to complete mutation in 13s
(x4)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

Failed to create installer pod for revision 5 count 1 on node "master-0": Internal error occurred: admission plugin "LimitRanger" failed to complete mutation in 13s

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

Failed to create installer pod for revision 5 count 1 on node "master-0": pods "installer-5-master-0" is forbidden: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreateFailed

Failed to create Pod/installer-5-master-0 -n openshift-kube-controller-manager: pods "installer-5-master-0" is forbidden: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-retry-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

multus

installer-5-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.100/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-5-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-controller-manager

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-controller-manager

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-5-retry-1-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-5-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-5-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-5-retry-1-master-0

Created

Created container: installer

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-66c7586884-sxqnh_71c81600-e49d-452a-9ff2-7347b409cefe

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-66c7586884-sxqnh_71c81600-e49d-452a-9ff2-7347b409cefe stopped leading

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_53406cb4-6985-4771-9a8c-1aee435e93e3 stopped leading

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-68f988879c-j2dj6 became leader

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-66c7586884-sxqnh_71c81600-e49d-452a-9ff2-7347b409cefe

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-66c7586884-sxqnh_71c81600-e49d-452a-9ff2-7347b409cefe stopped leading

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-69b6fc6b88-cg9rz_c850cc16-ffc9-411e-a3ea-88a0563ef695 stopped leading

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70" already present on machine

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Created

Created container: cluster-node-tuning-operator

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Created

Created container: cluster-node-tuning-operator

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70" already present on machine

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3" already present on machine

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Created

Created container: cluster-autoscaler-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Created

Created container: cluster-autoscaler-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Started

Started container cluster-autoscaler-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Started

Started container kube-apiserver-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Created

Created container: kube-apiserver-operator

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Created

Created container: cluster-olm-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-69576476f7-dqvvb

Started

Started container cluster-autoscaler-operator

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Created

Created container: machine-config-controller

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-h76wh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" already present on machine

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c" already present on machine

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Started

Started container cluster-node-tuning-operator

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5" already present on machine

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7" already present on machine

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Created

Created container: machine-api-operator

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Started

Started container machine-api-operator

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Created

Created container: machine-api-operator

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953" already present on machine

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Started

Started container machine-api-operator

openshift-machine-api

kubelet

machine-api-operator-84bf6db4f9-t8jw4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7" already present on machine

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-66c7586884-sxqnh

Started

Started container cluster-node-tuning-operator

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77899cf6d-cgdkk

Started

Started container cluster-olm-operator

openshift-machine-config-operator

kubelet

machine-config-controller-ff46b7bdf-55p6v

Started

Started container machine-config-controller

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Created

Created container: authentication-operator

openshift-authentication-operator

kubelet

authentication-operator-7c6989d6c4-7w8wf

Started

Started container authentication-operator

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3" already present on machine

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68bd585b-qnhrz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" already present on machine

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-s44f4

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" already present on machine

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-s44f4

Created

Created container: cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-8c9c967c7-s44f4

Started

Started container cluster-version-operator

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3" already present on machine

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Started

Started container kube-controller-manager-operator

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-h76wh

Started

Started container service-ca-controller

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Started

Started container csi-snapshot-controller-operator

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-5685fbc7d-txnh5

Created

Created container: csi-snapshot-controller-operator

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

Started

Started container cluster-storage-operator

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-6fbfc8dc8f-v48jn

Created

Created container: cluster-storage-operator

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Created

Created container: service-ca-operator

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-86d7cdfdfb-wb26b

Created

Created container: kube-controller-manager-operator

openshift-service-ca-operator

kubelet

service-ca-operator-69b6fc6b88-cg9rz

Started

Started container service-ca-operator

openshift-service-ca

kubelet

service-ca-84bfdbbb7f-h76wh

Created

Created container: service-ca-controller

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Created

Created container: network-operator

openshift-network-operator

kubelet

network-operator-7c649bf6d4-v4xm9

Started

Started container network-operator

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

Started

Started container cluster-image-registry-operator

openshift-image-registry

kubelet

cluster-image-registry-operator-86d6d77c7c-kg26q

Created

Created container: cluster-image-registry-operator

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-66b55d57d-mc46k became leader

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from False to True ("ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOCDownloadsSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get consoleclidownloads.console.openshift.io oc-cli-downloads)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"),Progressing changed from True to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "WorkloadDegraded: \"openshift-controller-manager\" \"deployment\": the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps controller-manager)\nWorkloadDegraded: " to "All is well"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \nAPIServerDeploymentDegraded: deployment/openshift-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps apiserver)\nAPIServerWorkloadDegraded: " to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: ",Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \nAPIServerDeploymentDegraded: deployment/openshift-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps apiserver)\nAPIServerWorkloadDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/openshift-apiserver: could not be retrieved"),Available changed from True to False ("APIServerDeploymentAvailable: deployment/openshift-apiserver: could not be retrieved")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOCDownloadsSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get consoleclidownloads.console.openshift.io oc-cli-downloads)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.34, 0 replicas available")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: "

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "All is well" to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-version-migration-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \nKubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-version-migration-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \nKubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-version-migration-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: "

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "WorkloadDegraded: \"openshift-controller-manager\" \"deployment\": the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps controller-manager)\nWorkloadDegraded: "
(x30)

openshift-console

kubelet

console-64d844fb5f-9b28j

ProbeError

Startup probe error: Get "https://10.128.0.91:8443/health": dial tcp 10.128.0.91:8443: connect: connection refused body:

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-86d7cdfdfb-wb26b_5dd62f42-af14-43c3-b251-2ea70175c0bc became leader

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-66c7586884-sxqnh_65deed5e-90ef-48ff-b458-13b53117a348

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-66c7586884-sxqnh_65deed5e-90ef-48ff-b458-13b53117a348 became leader

openshift-machine-api

cluster-autoscaler-operator-69576476f7-dqvvb_6bf83542-8cb9-44c8-b134-0cc8b862d467

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-69576476f7-dqvvb_6bf83542-8cb9-44c8-b134-0cc8b862d467 became leader

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(1c6f1e263aa1f0a5ac95d2a74e2c146c)\nNodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-5685fbc7d-txnh5_3366f793-ac50-4bec-8fd6-e4a897301e5a became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api

cluster-autoscaler-operator-69576476f7-dqvvb_6bf83542-8cb9-44c8-b134-0cc8b862d467

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-69576476f7-dqvvb_6bf83542-8cb9-44c8-b134-0cc8b862d467 became leader

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-86d6d77c7c-kg26q_4748dfdb-d185-4898-b524-023a831dd73e became leader

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_dc1916ea-4016-45e2-846f-9cfa7c24c0a5 became leader

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7c6989d6c4-7w8wf_3319c0bc-d734-43be-91a7-b580298b6081 became leader

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io catalogd-manager-rolebinding)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)"

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded message changed from "All is well" to "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)"

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_856ce567-d781-4a08-bfaf-cd0cfd49ac3f became leader

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-69b6fc6b88-cg9rz_464be1a0-3f84-45b4-9b80-a55b288d9a56 became leader

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-84bfdbbb7f-h76wh_e818ff94-164b-428a-a263-8da23493fba9 became leader

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-66c7586884-sxqnh_65deed5e-90ef-48ff-b458-13b53117a348

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-66c7586884-sxqnh_65deed5e-90ef-48ff-b458-13b53117a348 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from True to False ("All is well")

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-77899cf6d-cgdkk_c8a005a8-aef4-4907-affc-564c6350bed3 became leader

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from False to True ("CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nWebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded")

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-68bd585b-qnhrz_9da70829-5bb7-48fd-90ec-17a43fef8594 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nWebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)"

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-6fbfc8dc8f-v48jn_326aa3b2-0f01-45a9-873e-a7ba5bbeb68e became leader

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded message changed from "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)" to "All is well"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 
21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-version-migration-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "All is well"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: " to "All is well"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io catalogd-manager-rolebinding)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: 
\nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)" to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)"

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.34" image="quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b" architecture="amd64"

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to ""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_908beb03-4fcd-4de1-9dc8-51d15924c0a7 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "All is well"

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulDelete

delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container prometheus

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulDelete

delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0307 21:23:04.314774 1 cmd.go:413] Getting controller reference for node master-0 I0307 21:23:04.324583 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0307 21:23:04.324651 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0307 21:23:04.324665 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0307 21:23:04.327621 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0307 21:23:14.618489 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0307 21:23:24.336592 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0307 21:23:54.336905 1 cmd.go:524] Getting installer pods for node master-0 F0307 21:24:08.341008 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-monitoring

kubelet

alertmanager-main-0

Killing

Stopping container alertmanager

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-rhtr2

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulDelete

delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

kubelet

alertmanager-main-0

Killing

Stopping container alertmanager

openshift-monitoring

kubelet

prometheus-k8s-0

Killing

Stopping container prometheus

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulDelete

delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nEtcdMembersDegraded: No unhealthy members found"

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-authentication

kubelet

oauth-openshift-698d9d45c9-5wh7z

Killing

Stopping container oauth-openshift

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication

replicaset-controller

oauth-openshift-578bc8c86c

SuccessfulCreate

Created pod: oauth-openshift-578bc8c86c-mczhd

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-578bc8c86c to 1 from 0

openshift-authentication

replicaset-controller

oauth-openshift-698d9d45c9

SuccessfulDelete

Deleted pod: oauth-openshift-698d9d45c9-5wh7z

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-698d9d45c9 to 0 from 1

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-cb4c85d9 to 1

openshift-multus

replicaset-controller

multus-admission-controller-cb4c85d9

SuccessfulCreate

Created pod: multus-admission-controller-cb4c85d9-8ltxz

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)" to "All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(1c6f1e263aa1f0a5ac95d2a74e2c146c)\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-cert-syncer

openshift-kube-scheduler

static-pod-installer

installer-5-retry-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:23:04.314774 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:23:04.324583 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 
21:23:04.324651 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0307 21:23:04.324665 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:23:04.327621 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0307 21:23:14.618489 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0307 21:23:24.336592 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:23:54.336905 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:24:08.341008 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver

multus

installer-4-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-4-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-4-retry-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-4-retry-1-master-0

Started

Started container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver

kubelet

installer-4-retry-1-master-0

Created

Created container: installer

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigDaemonFailed

Failed to resync 4.18.34 because: failed to apply machine config daemon manifests: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io machine-config-daemon)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-7577d6f48-kzjmp

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-7577d6f48-kzjmp became leader

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ")

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_ca1e99e0-1d3a-48ec-b5ea-f4f5b1b83bb3 became leader

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_cf1f63c7-3a49-451c-82a1-0de209673333 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 5 because static pod is ready

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-network-node-identity

master-0_345811e7-611e-4993-a9a0-0118a967250c

ovnkube-identity

LeaderElection

master-0_345811e7-611e-4993-a9a0-0118a967250c became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_fa321c6e-3927-46fe-b6c9-5a384ebd6716 became leader

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-syncer

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true
(x4)

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ScriptControllerErrorUpdatingStatus

Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_17b6ecd8-413e-4606-972f-5a125fd07a84 became leader
(x4)

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

EtcdEndpointsErrorUpdatingStatus

Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"8d4863af-6353-4ff9-a257-5f18254a5d79\", ResourceVersion:\"16008\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 21, 8, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 7, 21, 17, 57, 0, time.Local), 
FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0030622a0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_18d6efdf-577f-4f26-94e4-73e8fdb476d3 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-cluster-machine-approver

master-0_d391faf6-9b86-4c90-9414-d1dc0786ed96

cluster-machine-approver-leader

LeaderElection

master-0_d391faf6-9b86-4c90-9414-d1dc0786ed96 became leader

openshift-catalogd

catalogd-controller-manager-7f8b8b6f4c-mc2rc_29f9ca0e-95c3-4723-87c0-50604ca7ba1b

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7f8b8b6f4c-mc2rc_29f9ca0e-95c3-4723-87c0-50604ca7ba1b became leader

openshift-operator-controller

operator-controller-controller-manager-6598bfb6c4-mlxbw_be511c09-2161-4cc6-b496-5f98dc4a430e

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-6598bfb6c4-mlxbw_be511c09-2161-4cc6-b496-5f98dc4a430e became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0307 21:23:04.314774 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0307 21:23:04.324583 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0307 21:23:04.324651 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" 
enabled=false\nNodeInstallerDegraded: I0307 21:23:04.324665 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0307 21:23:04.327621 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0307 21:23:14.618489 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0307 21:23:24.336592 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0307 21:23:54.336905 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0307 21:24:08.341008 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ")

openshift-cloud-controller-manager-operator

master-0_b6680b23-8369-458b-a059-c1a242b8e5f1

cluster-cloud-config-sync-leader

LeaderElection

master-0_b6680b23-8369-458b-a059-c1a242b8e5f1 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n 
openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: 
installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 3:34.433323 1 
cmd.go:639] Writing secret manifest \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key\" ...\nNodeInstallerDegraded: I0307 21:23:34.433520 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:23:48.644225 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:02.699262 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: I0307 21:24:16.955757 1 cmd.go:335] Getting pod configmaps/kube-scheduler-pod-5 -n openshift-kube-scheduler\nNodeInstallerDegraded: W0307 21:24:44.957914 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-5-master-0.189aac234e0f6bc4.6a44ed53 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:installer-5-master-0,UID:96e31400-86e3-46d2-97ee-12fd3e17893a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 5: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,LastTimestamp:2026-03-07 21:24:30.957177796 +0000 UTC m=+87.345584192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/events?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0307 21:24:44.958212 1 cmd.go:109] failed to copy: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod-5?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 5 because static pod is ready

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-machine-api

control-plane-machine-set-operator-6686554ddc-dgjgz_56d86207-7c57-4254-b97c-46af50cf7d92

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-6686554ddc-dgjgz_56d86207-7c57-4254-b97c-46af50cf7d92 became leader

openshift-operator-lifecycle-manager

package-server-manager-854648ff6d-kr9ft_e5e4995e-3347-4817-8b12-cab74cb7097f

packageserver-controller-lock

LeaderElection

package-server-manager-854648ff6d-kr9ft_e5e4995e-3347-4817-8b12-cab74cb7097f became leader

openshift-machine-api

cluster-baremetal-operator-5cdb4c5598-nmwjr_e51eb681-11c9-4f47-a8ac-939582a33209

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-5cdb4c5598-nmwjr_e51eb681-11c9-4f47-a8ac-939582a33209 became leader

openshift-cloud-controller-manager-operator

master-0_7a72586f-055c-4b22-a26d-a4d3af77f8c5

cluster-cloud-controller-manager-leader

LeaderElection

master-0_7a72586f-055c-4b22-a26d-a4d3af77f8c5 became leader

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_56041d3f-5f01-4f60-a81b-0c3ff8c1b6d4 became leader

openshift-multus

kubelet

cni-sysctl-allowlist-ds-rhtr2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" already present on machine

openshift-authentication

kubelet

oauth-openshift-578bc8c86c-mczhd

Started

Started container oauth-openshift

openshift-multus

kubelet

cni-sysctl-allowlist-ds-rhtr2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192" already present on machine

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9" already present on machine

openshift-authentication

kubelet

oauth-openshift-578bc8c86c-mczhd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-multus

kubelet

cni-sysctl-allowlist-ds-rhtr2

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-rhtr2

Started

Started container kube-multus-additional-cni-plugins

openshift-authentication

multus

oauth-openshift-578bc8c86c-mczhd

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-multus

multus

multus-admission-controller-cb4c85d9-8ltxz

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-authentication

kubelet

oauth-openshift-578bc8c86c-mczhd

Created

Created container: oauth-openshift

openshift-multus

kubelet

cni-sysctl-allowlist-ds-rhtr2

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-rhtr2

Started

Started container kube-multus-additional-cni-plugins

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9" already present on machine

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0" already present on machine

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Created

Created container: multus-admission-controller

openshift-multus

replicaset-controller

multus-admission-controller-56bbfd46b8

SuccessfulDelete

Deleted pod: multus-admission-controller-56bbfd46b8-6qcf8

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Started

Started container kube-rbac-proxy

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-56bbfd46b8 to 0 from 1

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Killing

Stopping container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Killing

Stopping container multus-admission-controller

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e43499c79a8b5d642b3376af9595daaf45f91b3f616c93b24155f0d47003963" already present on machine

openshift-multus

kubelet

multus-admission-controller-cb4c85d9-8ltxz

Started

Started container multus-admission-controller

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-56bbfd46b8 to 0 from 1

openshift-multus

replicaset-controller

multus-admission-controller-56bbfd46b8

SuccessfulDelete

Deleted pod: multus-admission-controller-56bbfd46b8-6qcf8

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Killing

Stopping container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-multus

kubelet

multus-admission-controller-56bbfd46b8-6qcf8

Killing

Stopping container multus-admission-controller

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cdb019b6769514c0e92ef92da73e914fbcf6254cc919677ee077c93ce324de0" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62eb734bffa3a20fcd96776dd00c9975c23c1068fc012b4104cc4971fdf32e63" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "All is well",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_af8864fa-71c8-4a1c-b3c0-def04054a610 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6f9c4688bb to 1

openshift-console

replicaset-controller

console-6f9c4688bb

SuccessfulCreate

Created pod: console-6f9c4688bb-5k492

openshift-console

multus

console-6f9c4688bb-5k492

AddedInterface

Add eth0 [10.128.0.106/23] from ovn-kubernetes

openshift-console

kubelet

console-6f9c4688bb-5k492

Started

Started container console

openshift-console

kubelet

console-6f9c4688bb-5k492

Created

Created container: console

openshift-console

kubelet

console-6f9c4688bb-5k492

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-64d844fb5f to 0 from 1

openshift-console

replicaset-controller

console-64d844fb5f

SuccessfulDelete

Deleted pod: console-64d844fb5f-9b28j

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace

sushy-emulator

deployment-controller

sushy-emulator

ScalingReplicaSet

Scaled up replica set sushy-emulator-78f6d7d749 to 1

sushy-emulator

replicaset-controller

sushy-emulator-78f6d7d749

SuccessfulCreate

Created pod: sushy-emulator-78f6d7d749-xgc79

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-xgc79

Pulling

Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490"

sushy-emulator

multus

sushy-emulator-78f6d7d749-xgc79

AddedInterface

Add eth0 [10.128.0.107/23] from ovn-kubernetes

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-xgc79

Started

Started container sushy-emulator

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-xgc79

Created

Created container: sushy-emulator

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-xgc79

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" in 6.548s (6.548s including waiting). Image size: 325685589 bytes.

sushy-emulator

deployment-controller

nova-console-poller

ScalingReplicaSet

Scaled up replica set nova-console-poller-849dd7bd7c to 1

sushy-emulator

replicaset-controller

nova-console-poller-849dd7bd7c

SuccessfulCreate

Created pod: nova-console-poller-849dd7bd7c-wlzjd

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

multus

nova-console-poller-849dd7bd7c-wlzjd

AddedInterface

Add eth0 [10.128.0.108/23] from ovn-kubernetes

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Started

Started container console-poller-119317ba-4c71-49cf-8a6d-c83962c7e7c8

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Created

Created container: console-poller-119317ba-4c71-49cf-8a6d-c83962c7e7c8

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 5.088s (5.088s including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Started

Started container console-poller-d95b73f4-18eb-4a08-9311-cb6ecdad7aa0

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 784ms (784ms including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-849dd7bd7c-wlzjd

Created

Created container: console-poller-d95b73f4-18eb-4a08-9311-cb6ecdad7aa0

sushy-emulator

deployment-controller

nova-console-recorder

ScalingReplicaSet

Scaled up replica set nova-console-recorder-6bd67877d9 to 1

sushy-emulator

replicaset-controller

nova-console-recorder-6bd67877d9

SuccessfulCreate

Created pod: nova-console-recorder-6bd67877d9-cd76q

sushy-emulator

multus

nova-console-recorder-6bd67877d9-cd76q

AddedInterface

Add eth0 [10.128.0.109/23] from ovn-kubernetes

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Created

Created container: console-recorder-119317ba-4c71-49cf-8a6d-c83962c7e7c8

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 9.528s (9.528s including waiting). Image size: 664134874 bytes.

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Started

Started container console-recorder-119317ba-4c71-49cf-8a6d-c83962c7e7c8

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest"

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 473ms (473ms including waiting). Image size: 664134874 bytes.

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Created

Created container: console-recorder-d95b73f4-18eb-4a08-9311-cb6ecdad7aa0

sushy-emulator

kubelet

nova-console-recorder-6bd67877d9-cd76q

Started

Started container console-recorder-d95b73f4-18eb-4a08-9311-cb6ecdad7aa0

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-storage namespace

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

SuccessfulCreate

Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Created

Created container: util

openshift-marketplace

multus

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

AddedInterface

Add eth0 [10.128.0.110/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Started

Started container util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.131s (1.131s including waiting). Image size: 108204 bytes.

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Created

Created container: pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Started

Started container pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Created

Created container: extract

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dkttk

Started

Started container extract

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

Completed

Job completed

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

AllRequirementsMet

all requirements found, attempting install

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-cc6c44d98 to 1
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy

openshift-storage

replicaset-controller

lvms-operator-cc6c44d98

SuccessfulCreate

Created pod: lvms-operator-cc6c44d98-tvcmb
(x2)

openshift-storage

kubelet

lvms-operator-cc6c44d98-tvcmb

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"

openshift-storage

multus

lvms-operator-cc6c44d98-tvcmb

AddedInterface

Add eth0 [10.128.0.111/23] from ovn-kubernetes
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.
(x2)

openshift-storage

kubelet

lvms-operator-cc6c44d98-tvcmb

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 5.282s (5.282s including waiting). Image size: 238305644 bytes.

openshift-storage

kubelet

lvms-operator-cc6c44d98-tvcmb

Created

Created container: manager

openshift-storage

kubelet

lvms-operator-cc6c44d98-tvcmb

Started

Started container manager

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

install strategy completed with no errors

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

SuccessfulCreate

Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Started

Started container util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

multus

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

AddedInterface

Add eth0 [10.128.0.112/23] from ovn-kubernetes

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Created

Created container: util

openshift-marketplace

job-controller

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f421346

SuccessfulCreate

Created pod: d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908"

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Started

Started container util

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Created

Created container: util

openshift-marketplace

multus

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

AddedInterface

Add eth0 [10.128.0.113/23] from ovn-kubernetes

openshift-marketplace

job-controller

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a824662b

SuccessfulCreate

Created pod: 0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:2d751ef9609ce7a75d216ef5bee7417f143f8584d795cb8bf9f5df6f7e99c62f"

openshift-marketplace

multus

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

AddedInterface

Add eth0 [10.128.0.114/23] from ovn-kubernetes

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Created

Created container: util

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Started

Started container util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 3.666s (3.666s including waiting). Image size: 108352841 bytes.

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:a73534482ccfeb0a712fe08fad5283873b7a53c4aacd0a1d20cce7661b5924e6"

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:2d751ef9609ce7a75d216ef5bee7417f143f8584d795cb8bf9f5df6f7e99c62f" in 2.509s (2.509s including waiting). Image size: 408551 bytes.

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Started

Started container extract

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Created

Created container: pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Created

Created container: extract

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Created

Created container: pull

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Started

Started container pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5c9dmg

Started

Started container pull

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:a73534482ccfeb0a712fe08fad5283873b7a53c4aacd0a1d20cce7661b5924e6" in 1.646s (1.646s including waiting). Image size: 255828 bytes.

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Started

Started container extract

openshift-marketplace

kubelet

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4s97zb

Created

Created container: extract

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

SuccessfulCreate

Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Created

Created container: pull

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6594fcb745 to 1

openshift-console

replicaset-controller

console-6594fcb745

SuccessfulCreate

Created pod: console-6594fcb745-7lf8n

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Started

Started container extract

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Created

Created container: extract

openshift-marketplace

kubelet

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb4ns

Started

Started container pull

openshift-console

kubelet

console-6594fcb745-7lf8n

Created

Created container: console

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console

multus

console-6594fcb745-7lf8n

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes

openshift-marketplace

multus

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

AddedInterface

Add eth0 [10.128.0.115/23] from ovn-kubernetes

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

Completed

Job completed

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Created

Created container: util

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Started

Started container util

openshift-console

kubelet

console-6594fcb745-7lf8n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

openshift-console

kubelet

console-6594fcb745-7lf8n

Started

Started container console

openshift-marketplace

job-controller

d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f421346

Completed

Job completed

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6"

openshift-marketplace

job-controller

0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a824662b

Completed

Job completed

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Created

Created container: pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.192s (1.192s including waiting). Image size: 4900233 bytes.

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Started

Started container pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Created

Created container: extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Started

Started container extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822r9w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

Completed

Job completed

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

RequirementsUnknown

requirements not yet checked

openshift-console

replicaset-controller

console-6f9c4688bb

SuccessfulDelete

Deleted pod: console-6f9c4688bb-5k492

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6f9c4688bb to 0 from 1

openshift-console

kubelet

console-6f9c4688bb-5k492

Killing

Stopping container console

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

RequirementsNotMet

one or more requirements couldn't be found

default

cert-manager-istio-csr-controller

ControllerStarted

controller is starting

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-545d4d4674 to 1

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-5545bd876 to 1

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager namespace
(x10)

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

FailedCreate

Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-6888856db4 to 1

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

SuccessfulCreate

Created pod: cert-manager-webhook-6888856db4-vmqtf

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsUnknown

requirements not yet checked

cert-manager

multus

cert-manager-webhook-6888856db4-vmqtf

AddedInterface

Add eth0 [10.128.0.118/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-webhook-6888856db4-vmqtf

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

SuccessfulCreate

Created pod: cert-manager-cainjector-5545bd876-p74j2

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsNotMet

one or more requirements couldn't be found

cert-manager

kubelet

cert-manager-cainjector-5545bd876-p74j2

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

multus

cert-manager-cainjector-5545bd876-p74j2

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes

(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

AllRequirementsMet

all requirements found, attempting install
(x12)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-547df9ff8b to 1

metallb-system

replicaset-controller

metallb-operator-controller-manager-547df9ff8b

SuccessfulCreate

Created pod: metallb-operator-controller-manager-547df9ff8b-bpxrb

metallb-system

replicaset-controller

metallb-operator-webhook-server-57d6f574cc

SuccessfulCreate

Created pod: metallb-operator-webhook-server-57d6f574cc-8zmmh

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-57d6f574cc to 1

cert-manager

kubelet

cert-manager-webhook-6888856db4-vmqtf

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 9.328s (9.328s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-webhook-6888856db4-vmqtf

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-cainjector-5545bd876-p74j2

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 8.176s (8.176s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-cainjector-5545bd876-p74j2

Started

Started container cert-manager-cainjector

cert-manager

kubelet

cert-manager-cainjector-5545bd876-p74j2

Created

Created container: cert-manager-cainjector

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install

cert-manager

kubelet

cert-manager-webhook-6888856db4-vmqtf

Started

Started container cert-manager-webhook

metallb-system

multus

metallb-operator-controller-manager-547df9ff8b-bpxrb

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

RequirementsUnknown

requirements not yet checked

metallb-system

multus

metallb-operator-webhook-server-57d6f574cc-8zmmh

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-4flmz

metallb-system

kubelet

metallb-operator-controller-manager-547df9ff8b-bpxrb

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:0f668226ec5fdc1726e9df3bb807b172040b59313117c8cbed8ade8e730a2225"

metallb-system

kubelet

metallb-operator-webhook-server-57d6f574cc-8zmmh

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9"

kube-system

cert-manager-cainjector-5545bd876-p74j2_b7d9ce7c-63b5-40c3-a113-d6f598084812

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-5545bd876-p74j2_b7d9ce7c-63b5-40c3-a113-d6f598084812 became leader

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-7c4564c96f to 2

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-7c4564c96f

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-rcw9j

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-sn8gn

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

AllRequirementsMet

all requirements found, attempting install

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-7c4564c96f

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

kubelet

perses-operator-5bf474d74f-rcw9j

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

openshift-operators

multus

perses-operator-5bf474d74f-rcw9j

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-nmstate

replicaset-controller

nmstate-operator-75c5dccd6c

SuccessfulCreate

Created pod: nmstate-operator-75c5dccd6c-548z6

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-75c5dccd6c to 1

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallWaiting

Webhook install failed: conversionWebhook not ready

metallb-system

operator-lifecycle-manager

install-hbdbq

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202602140741" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2

openshift-operators

multus

observability-operator-59bdc8b94-sn8gn

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

openshift-operators

kubelet

observability-operator-59bdc8b94-sn8gn

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

InstallSucceeded

waiting for install components to report healthy

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-4flmz

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-4flmz

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

openshift-nmstate

multus

nmstate-operator-75c5dccd6c-548z6

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-548z6

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:eb1c8c98cba8bfc388bfdd61fc561ddff36727fba65def7521412c52e4020809"

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-fz858

(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallSucceeded

waiting for install components to report healthy
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.

openshift-console

kubelet

console-6f9c4688bb-5k492

Unhealthy

Readiness probe failed: Get "https://10.128.0.106:8443/health": dial tcp 10.128.0.106:8443: connect: connection refused

openshift-console

kubelet

console-6f9c4688bb-5k492

ProbeError

Readiness probe error: Get "https://10.128.0.106:8443/health": dial tcp 10.128.0.106:8443: connect: connection refused body:

metallb-system

kubelet

metallb-operator-controller-manager-547df9ff8b-bpxrb

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:0f668226ec5fdc1726e9df3bb807b172040b59313117c8cbed8ade8e730a2225" in 14.954s (14.954s including waiting). Image size: 462535787 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 13.351s (13.351s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-sn8gn

Created

Created container: operator

openshift-operators

kubelet

observability-operator-59bdc8b94-sn8gn

Started

Started container operator

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-4flmz

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 13.616s (13.616s including waiting). Image size: 199215153 bytes.

metallb-system

kubelet

metallb-operator-controller-manager-547df9ff8b-bpxrb

Created

Created container: manager

metallb-system

kubelet

metallb-operator-controller-manager-547df9ff8b-bpxrb

Started

Started container manager

openshift-operators

kubelet

observability-operator-59bdc8b94-sn8gn

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 13.223s (13.223s including waiting). Image size: 399540002 bytes.

metallb-system

kubelet

metallb-operator-webhook-server-57d6f574cc-8zmmh

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9" in 15.545s (15.545s including waiting). Image size: 555109584 bytes.

openshift-operators

kubelet

perses-operator-5bf474d74f-rcw9j

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 13.085s (13.085s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 13.467s (13.467s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 13.351s (13.351s including waiting). Image size: 151103408 bytes.

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-548z6

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:eb1c8c98cba8bfc388bfdd61fc561ddff36727fba65def7521412c52e4020809" in 12.434s (12.434s including waiting). Image size: 451492486 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq

Started

Started container prometheus-operator-admission-webhook

cert-manager

kubelet

cert-manager-545d4d4674-fz858

Started

Started container cert-manager-controller

cert-manager

kubelet

cert-manager-545d4d4674-fz858

Created

Created container: cert-manager-controller

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-548z6

Created

Created container: nmstate-operator

openshift-nmstate

kubelet

nmstate-operator-75c5dccd6c-548z6

Started

Started container nmstate-operator

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-545d4d4674-fz858-external-cert-manager-controller became leader

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-4flmz

Created

Created container: prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-4flmz

Started

Started container prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wxwkq

Created

Created container: prometheus-operator-admission-webhook

metallb-system

metallb-operator-controller-manager-547df9ff8b-bpxrb_f7af7908-eea5-4f98-bd18-b90c8c71ab4f

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-547df9ff8b-bpxrb_f7af7908-eea5-4f98-bd18-b90c8c71ab4f became leader

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-7c4564c96f-wzsq9

Started

Started container prometheus-operator-admission-webhook

metallb-system

kubelet

metallb-operator-webhook-server-57d6f574cc-8zmmh

Started

Started container webhook-server

metallb-system

kubelet

metallb-operator-webhook-server-57d6f574cc-8zmmh

Created

Created container: webhook-server

cert-manager

kubelet

cert-manager-545d4d4674-fz858

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

cert-manager

multus

cert-manager-545d4d4674-fz858

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

openshift-operators

kubelet

perses-operator-5bf474d74f-rcw9j

Created

Created container: perses-operator

openshift-operators

kubelet

perses-operator-5bf474d74f-rcw9j

Started

Started container perses-operator

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602172216

InstallSucceeded

install strategy completed with no errors

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

install strategy completed with no errors

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202602140741

InstallSucceeded

install strategy completed with no errors

metallb-system

kubelet

frr-k8s-9cvbt

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-lnt6b

metallb-system

replicaset-controller

frr-k8s-webhook-server-7f989f654f

SuccessfulCreate

Created pod: frr-k8s-webhook-server-7f989f654f-vnw67

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-7f989f654f to 1

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-9cvbt

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-86ddb6bd46 to 1

metallb-system

replicaset-controller

controller-86ddb6bd46

SuccessfulCreate

Created pod: controller-86ddb6bd46-nx428

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 517826ef-9d9a-4d85-81e1-3be2d1ae428c] does not exist in namespace ""

metallb-system

kubelet

speaker-lnt6b

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "speaker-certs-secret" not found

metallb-system

multus

frr-k8s-webhook-server-7f989f654f-vnw67

AddedInterface

Add eth0 [10.128.0.129/23] from ovn-kubernetes

metallb-system

multus

controller-86ddb6bd46-nx428

AddedInterface

Add eth0 [10.128.0.130/23] from ovn-kubernetes
(x2)

metallb-system

kubelet

speaker-lnt6b

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-vnw67

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9"

metallb-system

kubelet

controller-86ddb6bd46-nx428

Started

Started container controller

metallb-system

kubelet

controller-86ddb6bd46-nx428

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9" already present on machine

metallb-system

kubelet

controller-86ddb6bd46-nx428

Created

Created container: controller

metallb-system

kubelet

controller-86ddb6bd46-nx428

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902"
(x2)

metallb-system

kubelet

frr-k8s-9cvbt

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9"

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-9lvhn

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-786f45cff4 to 1

metallb-system

kubelet

speaker-lnt6b

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:02d5ffcd04189eb7328b7a5f79ce5e4cdf09216f2560d702e61e63eb8e2588d9" already present on machine

openshift-nmstate

replicaset-controller

nmstate-console-plugin-5dcbbd79cf

SuccessfulCreate

Created pod: nmstate-console-plugin-5dcbbd79cf-cbbp5

openshift-nmstate

deployment-controller

nmstate-console-plugin

ScalingReplicaSet

Scaled up replica set nmstate-console-plugin-5dcbbd79cf to 1

openshift-nmstate

replicaset-controller

nmstate-webhook-786f45cff4

SuccessfulCreate

Created pod: nmstate-webhook-786f45cff4-lsgfs

openshift-nmstate

replicaset-controller

nmstate-metrics-69594cc75

SuccessfulCreate

Created pod: nmstate-metrics-69594cc75-26sjk

openshift-nmstate

deployment-controller

nmstate-metrics

ScalingReplicaSet

Scaled up replica set nmstate-metrics-69594cc75 to 1

metallb-system

kubelet

controller-86ddb6bd46-nx428

Created

Created container: kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-cbbp5

Pulling

Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:0b7639d1c6c6a759c2d100c224c774d3ccd4065f4b299a6ea69a8bfebc7febf5"

openshift-nmstate

multus

nmstate-webhook-786f45cff4-lsgfs

AddedInterface

Add eth0 [10.128.0.132/23] from ovn-kubernetes

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-5c96487ddf to 1

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-lsgfs

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5"
(x7)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed

openshift-nmstate

multus

nmstate-metrics-69594cc75-26sjk

AddedInterface

Add eth0 [10.128.0.131/23] from ovn-kubernetes

openshift-nmstate

multus

nmstate-console-plugin-5dcbbd79cf-cbbp5

AddedInterface

Add eth0 [10.128.0.133/23] from ovn-kubernetes

openshift-console

multus

console-5c96487ddf-5r2nd

AddedInterface

Add eth0 [10.128.0.134/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-26sjk

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5"

openshift-nmstate

kubelet

nmstate-handler-9lvhn

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5"
(x3)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.34, 1 replicas available"

metallb-system

kubelet

speaker-lnt6b

Started

Started container speaker

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

metallb-system

kubelet

controller-86ddb6bd46-nx428

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" in 1.737s (1.737s including waiting). Image size: 465086330 bytes.

metallb-system

kubelet

speaker-lnt6b

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902"

metallb-system

kubelet

speaker-lnt6b

Created

Created container: speaker

metallb-system

kubelet

controller-86ddb6bd46-nx428

Started

Started container kube-rbac-proxy

openshift-console

replicaset-controller

console-5c96487ddf

SuccessfulCreate

Created pod: console-5c96487ddf-5r2nd

openshift-console

kubelet

console-5c96487ddf-5r2nd

Started

Started container console

metallb-system

kubelet

speaker-lnt6b

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" in 1.028s (1.028s including waiting). Image size: 465086330 bytes.

openshift-console

kubelet

console-5c96487ddf-5r2nd

Created

Created container: console

metallb-system

kubelet

speaker-lnt6b

Created

Created container: kube-rbac-proxy

openshift-console

kubelet

console-5c96487ddf-5r2nd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db06a0e0308b2e541c7bb2d11517431abb31133b2ce6cb6c34ecf5ef4188a4e8" already present on machine

metallb-system

kubelet

speaker-lnt6b

Started

Started container kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-handler-9lvhn

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5" in 6.25s (6.25s including waiting). Image size: 498677652 bytes.

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-26sjk

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" already present on machine

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-cbbp5

Started

Started container nmstate-console-plugin

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-vnw67

Started

Started container frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-vnw67

Created

Created container: frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-webhook-server-7f989f654f-vnw67

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" in 7.849s (7.849s including waiting). Image size: 662213339 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-cbbp5

Created

Created container: nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-lsgfs

Started

Started container nmstate-webhook

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-lsgfs

Created

Created container: nmstate-webhook

openshift-nmstate

kubelet

nmstate-webhook-786f45cff4-lsgfs

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5" in 5.778s (5.778s including waiting). Image size: 498677652 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-5dcbbd79cf-cbbp5

Pulled

Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:0b7639d1c6c6a759c2d100c224c774d3ccd4065f4b299a6ea69a8bfebc7febf5" in 5.513s (5.513s including waiting). Image size: 453887352 bytes.

openshift-nmstate

kubelet

nmstate-handler-9lvhn

Created

Created container: nmstate-handler

openshift-nmstate

kubelet

nmstate-handler-9lvhn

Started

Started container nmstate-handler

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" in 7.672s (7.672s including waiting). Image size: 662213339 bytes.

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: cp-frr-files

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container cp-frr-files

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-26sjk

Started

Started container nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-26sjk

Created

Created container: nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-26sjk

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:5c00ed4b5d044125b3dc619b01575e86f3955d6549ef398ccc91bbf21ceb6ad5" in 5.665s (5.665s including waiting). Image size: 498677652 bytes.

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-26sjk

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container cp-reloader

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-69594cc75-26sjk

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container reloader

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: reloader

openshift-console

kubelet

console-6594fcb745-7lf8n

Killing

Stopping container console

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container controller

openshift-console

replicaset-controller

console-6594fcb745

SuccessfulDelete

Deleted pod: console-6594fcb745-7lf8n

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6594fcb745 to 0 from 1

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:787be45b5241419b6819676d43325a9030c0e16441918e4a33a44f0380d6b902" already present on machine

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container frr

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: frr

metallb-system

kubelet

frr-k8s-9cvbt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:d7e76e936159ed04e779a66d421cc3ecc6c82409e8eed924112d9174c3d6aad9" already present on machine

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: controller

metallb-system

kubelet

frr-k8s-9cvbt

Started

Started container reloader
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")

metallb-system

kubelet

frr-k8s-9cvbt

Created

Created container: kube-rbac-proxy

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.34, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.34, 2 replicas available"

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-9nzbx

openshift-storage

multus

vg-manager-9nzbx

AddedInterface

Add eth0 [10.128.0.135/23] from ovn-kubernetes

openshift-storage

multus

vg-manager-9nzbx

AddedInterface

Add eth0 [10.128.0.135/23] from ovn-kubernetes
(x2)

openshift-storage

kubelet

vg-manager-9nzbx

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine
(x2)

openshift-storage

kubelet

vg-manager-9nzbx

Started

Started container vg-manager
(x2)

openshift-storage

kubelet

vg-manager-9nzbx

Created

Created container: vg-manager
(x2)

openshift-storage

kubelet

vg-manager-9nzbx

Created

Created container: vg-manager
(x15)

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io
(x15)

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack-operators namespace

openstack-operators

kubelet

openstack-operator-index-zt56c

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

multus

openstack-operator-index-zt56c

AddedInterface

Add eth0 [10.128.0.136/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-zt56c

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-zt56c

Started

Started container registry-server

openstack-operators

kubelet

openstack-operator-index-zt56c

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 963ms (963ms including waiting). Image size: 918631088 bytes.

(x10)

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index

openstack-operators

kubelet

openstack-operator-index-zt56c

Killing

Stopping container registry-server

openstack-operators

multus

openstack-operator-index-klqvq

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-klqvq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 513ms (513ms including waiting). Image size: 918631088 bytes.

openstack-operators

kubelet

openstack-operator-index-klqvq

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

kubelet

openstack-operator-index-klqvq

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-klqvq

Started

Started container registry-server

openstack-operators

job-controller

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99ab74d

SuccessfulCreate

Created pod: 0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

openstack-operators

multus

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

AddedInterface

Add eth0 [10.128.0.138/23] from ovn-kubernetes

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Created

Created container: util

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:da1efd1b58ce237ec2ea1856e07a2da750caf6eb"

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Started

Started container util

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8" already present on machine

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Started

Started container pull

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Created

Created container: pull

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:da1efd1b58ce237ec2ea1856e07a2da750caf6eb" in 729ms (729ms including waiting). Image size: 115773 bytes.

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec" already present on machine

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Created

Created container: extract

openstack-operators

kubelet

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99mmjd5

Started

Started container extract

openstack-operators

job-controller

0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99ab74d

Completed

Job completed

openstack-operators

deployment-controller

openstack-operator-controller-init

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-init-6f44f7b99f to 1
(x2)

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability.

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

RequirementsUnknown

requirements not yet checked

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

AllRequirementsMet

all requirements found, attempting install

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

waiting for install components to report healthy

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-init to become ready: waiting for spec update of deployment "openstack-operator-controller-init" to be observed...
(x2)

openstack-operators

replicaset-controller

openstack-operator-controller-init-6f44f7b99f

SuccessfulCreate

Created pod: openstack-operator-controller-init-6f44f7b99f-fplrp

openstack-operators

kubelet

openstack-operator-controller-init-6f44f7b99f-fplrp

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:9e10a98495ce6a05e00b09f74eeae9fdac20e29ca647bc46453a9ae72f4fa498"

openstack-operators

multus

openstack-operator-controller-init-6f44f7b99f-fplrp

AddedInterface

Add eth0 [10.128.0.139/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-init-6f44f7b99f-fplrp

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-init-6f44f7b99f-fplrp

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:9e10a98495ce6a05e00b09f74eeae9fdac20e29ca647bc46453a9ae72f4fa498" in 4.619s (4.619s including waiting). Image size: 293351753 bytes.

openstack-operators

kubelet

openstack-operator-controller-init-6f44f7b99f-fplrp

Started

Started container operator

openstack-operators

openstack-operator-controller-init-6f44f7b99f-fplrp_3b940971-2cd6-43a6-bc5f-45e57783f707

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-init-6f44f7b99f-fplrp_3b940971-2cd6-43a6-bc5f-45e57783f707 became leader

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

install strategy completed with no errors

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-x44qr"

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-s275b"

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-x4wnq"

openstack-operators

cert-manager-certificaterequests-approver

barbican-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

barbican-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

cinder-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

infra-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-p2wbw"

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-jslks"

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

barbican-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

infra-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "infra-operator-metrics-certs-tszfc"

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-swlgn"

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

glance-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-6x7cv"

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-648564c9fc to 1

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-7b6bfb6475

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-7b6bfb6475-j288g

openstack-operators

deployment-controller

mariadb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set mariadb-operator-controller-manager-7b6bfb6475 to 1

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-65b58d74b to 1

openstack-operators

cert-manager-certificates-trigger

test-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-bccc79885 to 1

openstack-operators

replicaset-controller

watcher-operator-controller-manager-bccc79885

SuccessfulCreate

Created pod: watcher-operator-controller-manager-bccc79885-k5rcm

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-dc6dbbbd to 1

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-55b5ff4dbb to 1

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-mwr9x"

openstack-operators

replicaset-controller

test-operator-controller-manager-55b5ff4dbb

SuccessfulCreate

Created pod: test-operator-controller-manager-55b5ff4dbb-9cpc2

openstack-operators

cert-manager-certificates-trigger

telemetry-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-5fdb694969 to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-5fdb694969

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-5fdb694969-bbqxt

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-9b9ff9f4d to 1

openstack-operators

replicaset-controller

swift-operator-controller-manager-9b9ff9f4d

SuccessfulCreate

Created pod: swift-operator-controller-manager-9b9ff9f4d-s8tqw

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-dc6dbbbd

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-78bc7f9bd9 to 1

openstack-operators

replicaset-controller

horizon-operator-controller-manager-78bc7f9bd9

SuccessfulCreate

Created pod: horizon-operator-controller-manager-78bc7f9bd9-rcxp2

openstack-operators

replicaset-controller

ovn-operator-controller-manager-75684d597f

SuccessfulCreate

Created pod: ovn-operator-controller-manager-75684d597f-ccbn4

openstack-operators

replicaset-controller

neutron-operator-controller-manager-54688575f

SuccessfulCreate

Created pod: neutron-operator-controller-manager-54688575f-vj8dt

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-75684d597f to 1

openstack-operators

replicaset-controller

barbican-operator-controller-manager-6db6876945

SuccessfulCreate

Created pod: barbican-operator-controller-manager-6db6876945-nlssq

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-54688575f to 1

openstack-operators

replicaset-controller

placement-operator-controller-manager-648564c9fc

SuccessfulCreate

Created pod: placement-operator-controller-manager-648564c9fc-l7256

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-67d996989d to 1

openstack-operators

replicaset-controller

manila-operator-controller-manager-67d996989d

SuccessfulCreate

Created pod: manila-operator-controller-manager-67d996989d-7ksrz

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-6db6876945 to 1

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-7c789f89c6 to 1

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-cf99c678f to 1

openstack-operators

replicaset-controller

heat-operator-controller-manager-cf99c678f

SuccessfulCreate

Created pod: heat-operator-controller-manager-cf99c678f-qmcr7

openstack-operators

replicaset-controller

keystone-operator-controller-manager-7c789f89c6

SuccessfulCreate

Created pod: keystone-operator-controller-manager-7c789f89c6-zq79c

openstack-operators

replicaset-controller

infra-operator-controller-manager-65b58d74b

SuccessfulCreate

Created pod: infra-operator-controller-manager-65b58d74b-rrd9h

openstack-operators

replicaset-controller

cinder-operator-controller-manager-55d77d7b5c

SuccessfulCreate

Created pod: cinder-operator-controller-manager-55d77d7b5c-hjt7h

openstack-operators

replicaset-controller

nova-operator-controller-manager-74b6b5dc96

SuccessfulCreate

Created pod: nova-operator-controller-manager-74b6b5dc96-ndppt

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-55d77d7b5c to 1

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-545456dc4 to 1

openstack-operators

replicaset-controller

ironic-operator-controller-manager-545456dc4

SuccessfulCreate

Created pod: ironic-operator-controller-manager-545456dc4-xth7w

openstack-operators

replicaset-controller

designate-operator-controller-manager-5d87c9d997

SuccessfulCreate

Created pod: designate-operator-controller-manager-5d87c9d997-jzt22

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-5d86c7ddb7 to 1

openstack-operators

replicaset-controller

octavia-operator-controller-manager-5d86c7ddb7

SuccessfulCreate

Created pod: octavia-operator-controller-manager-5d86c7ddb7-2plwq

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-5d87c9d997 to 1

openstack-operators

cert-manager-certificates-issuing

designate-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-74b6b5dc96 to 1

openstack-operators

replicaset-controller

glance-operator-controller-manager-64db6967f8

SuccessfulCreate

Created pod: glance-operator-controller-manager-64db6967f8-mq69x

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-64db6967f8 to 1

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-hjt7h

Pulling

Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3"

openstack-operators

cert-manager-certificates-trigger

watcher-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-r8626"

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

multus

glance-operator-controller-manager-64db6967f8-mq69x

AddedInterface

Add eth0 [10.128.0.143/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-acme

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-mq69x

Pulling

Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051"

openstack-operators

cert-manager-certificaterequests-issuer-ca

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-qmcr7

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053"

openstack-operators

multus

heat-operator-controller-manager-cf99c678f-qmcr7

AddedInterface

Add eth0 [10.128.0.144/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-venafi

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

barbican-operator-controller-manager-6db6876945-nlssq

AddedInterface

Add eth0 [10.128.0.140/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

openstack-operator-controller-manager-7dfcb4d64f

SuccessfulCreate

Created pod: openstack-operator-controller-manager-7dfcb4d64f-grrjr

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-7dfcb4d64f to 1

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-668c99d594

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-668c99d594-r8xj9

openstack-operators

cert-manager-certificates-trigger

watcher-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-tsn7c"

openstack-operators

multus

designate-operator-controller-manager-5d87c9d997-jzt22

AddedInterface

Add eth0 [10.128.0.142/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

octavia-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-9ggw9"

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

infra-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

multus

cinder-operator-controller-manager-55d77d7b5c-hjt7h

AddedInterface

Add eth0 [10.128.0.141/23] from ovn-kubernetes

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-qmcr7

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053"

openstack-operators

multus

heat-operator-controller-manager-cf99c678f-qmcr7

AddedInterface

Add eth0 [10.128.0.144/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

ironic-operator-controller-manager-545456dc4-xth7w

Pulling

Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01"

openstack-operators

multus

keystone-operator-controller-manager-7c789f89c6-zq79c

AddedInterface

Add eth0 [10.128.0.148/23] from ovn-kubernetes

openstack-operators

kubelet

keystone-operator-controller-manager-7c789f89c6-zq79c

Pulling

Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c"

openstack-operators

multus

horizon-operator-controller-manager-78bc7f9bd9-rcxp2

AddedInterface

Add eth0 [10.128.0.145/23] from ovn-kubernetes

openstack-operators

multus

ironic-operator-controller-manager-545456dc4-xth7w

AddedInterface

Add eth0 [10.128.0.147/23] from ovn-kubernetes

openstack-operators

kubelet

designate-operator-controller-manager-5d87c9d997-jzt22

Pulling

Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214"

openstack-operators

cert-manager-certificaterequests-issuer-ca

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-vault

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

barbican-operator-controller-manager-6db6876945-nlssq

Pulling

Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120"

openstack-operators

cert-manager-certificates-trigger

openstack-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-venafi

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

manila-operator-controller-manager-67d996989d-7ksrz

AddedInterface

Add eth0 [10.128.0.149/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

ovn-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-qghrx"

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-7ksrz

Pulling

Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26"

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-j288g

Pulling

Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505"

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-rcxp2

Pulling

Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3"

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-vj8dt

Pulling

Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

multus

neutron-operator-controller-manager-54688575f-vj8dt

AddedInterface

Add eth0 [10.128.0.151/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

heat-operator-metrics-certs

Requested

Created new CertificateRequest resource "heat-operator-metrics-certs-1"

openstack-operators

multus

mariadb-operator-controller-manager-7b6bfb6475-j288g

AddedInterface

Add eth0 [10.128.0.150/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-qb8gj"

openstack-operators

multus

swift-operator-controller-manager-9b9ff9f4d-s8tqw

AddedInterface

Add eth0 [10.128.0.157/23] from ovn-kubernetes

openstack-operators

multus

placement-operator-controller-manager-648564c9fc-l7256

AddedInterface

Add eth0 [10.128.0.156/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

placement-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "placement-operator-metrics-certs-4pkws"

openstack-operators

multus

rabbitmq-cluster-operator-manager-668c99d594-r8xj9

AddedInterface

Add eth0 [10.128.0.162/23] from ovn-kubernetes

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-r8xj9

Pulling

Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

watcher-operator-controller-manager-bccc79885-k5rcm

AddedInterface

Add eth0 [10.128.0.160/23] from ovn-kubernetes

openstack-operators

multus

test-operator-controller-manager-55b5ff4dbb-9cpc2

AddedInterface

Add eth0 [10.128.0.159/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

ovn-operator-controller-manager-75684d597f-ccbn4

AddedInterface

Add eth0 [10.128.0.155/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

octavia-operator-metrics-certs

Requested

Created new CertificateRequest resource "octavia-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

multus

telemetry-operator-controller-manager-5fdb694969-bbqxt

AddedInterface

Add eth0 [10.128.0.158/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-grrzv"

openstack-operators

multus

octavia-operator-controller-manager-5d86c7ddb7-2plwq

AddedInterface

Add eth0 [10.128.0.153/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

multus

nova-operator-controller-manager-74b6b5dc96-ndppt

AddedInterface

Add eth0 [10.128.0.152/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

octavia-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

nova-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "nova-operator-metrics-certs-65254"

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-ccbn4

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c"

openstack-operators

cert-manager-certificaterequests-issuer-ca

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

ironic-operator-metrics-certs

Requested

Created new CertificateRequest resource "ironic-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

neutron-operator-metrics-certs

Requested

Created new CertificateRequest resource "neutron-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

telemetry-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-wxmnx"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-bbqxt

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

ironic-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-s8tqw

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7"

openstack-operators

cert-manager-certificates-key-manager

swift-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "swift-operator-metrics-certs-v6ft5"

openstack-operators

cert-manager-certificaterequests-issuer-acme

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

nova-operator-metrics-certs

Requested

Created new CertificateRequest resource "nova-operator-metrics-certs-1"

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-s8tqw

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-5klng"

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-k5rcm

Pulling

Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97"

openstack-operators

cert-manager-certificaterequests-issuer-vault

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

swift-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "swift-operator-metrics-certs-v6ft5"

openstack-operators

cert-manager-certificaterequests-approver

ironic-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-acme

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

nova-operator-metrics-certs

Requested

Created new CertificateRequest resource "nova-operator-metrics-certs-1"

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-k5rcm

Pulling

Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-ndppt

Pulling

Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84"

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-5klng"

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-bbqxt

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6"

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

nova-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-2plwq

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

nova-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

infra-operator-metrics-certs

Requested

Created new CertificateRequest resource "infra-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

kubelet

test-operator-controller-manager-55b5ff4dbb-9cpc2

Pulling

Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968"

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-cdgpp"

openstack-operators

cert-manager-certificates-issuing

octavia-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-2plwq

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd"

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-86k7v"

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-cdgpp"

openstack-operators

kubelet

test-operator-controller-manager-55b5ff4dbb-9cpc2

Pulling

Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968"

openstack-operators

cert-manager-certificates-issuing

octavia-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-l7256

Pulling

Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

infra-operator-metrics-certs

Requested

Created new CertificateRequest resource "infra-operator-metrics-certs-1"

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-l7256

Pulling

Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e"

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-86k7v"

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x5)

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x5)

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

horizon-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

horizon-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-g7c4f"

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x5)

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-dlhfn"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x5)

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
(x5)

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-ca

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-g7c4f"

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-bdl7m"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io
(x5)

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-bdl7m"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

watcher-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

ironic-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-request-manager

infra-operator-serving-cert

Requested

Created new CertificateRequest resource "infra-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

test-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

watcher-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

infra-operator-serving-cert

Requested

Created new CertificateRequest resource "infra-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

telemetry-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

test-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

ironic-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

telemetry-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

keystone-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-vault

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-request-manager

manila-operator-metrics-certs

Requested

Created new CertificateRequest resource "manila-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

nova-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

neutron-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

manila-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

infra-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-vault

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

mariadb-operator-metrics-certs

Requested

Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-hjt7h

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" in 12.8s (12.8s including waiting). Image size: 191425982 bytes.

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

kubelet

barbican-operator-controller-manager-6db6876945-nlssq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120" in 13.171s (13.171s including waiting). Image size: 191115738 bytes.

openstack-operators

cert-manager-certificates-issuing

test-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

placement-operator-metrics-certs

Requested

Created new CertificateRequest resource "placement-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

mariadb-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

watcher-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-venafi

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

telemetry-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

designate-operator-controller-manager-5d87c9d997-jzt22

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" in 13.164s (13.164s including waiting). Image size: 195967461 bytes.

openstack-operators

cert-manager-certificaterequests-issuer-vault

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-mq69x

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051" in 13.249s (13.249s including waiting). Image size: 192004030 bytes.

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-qmcr7

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053" in 13.482s (13.482s including waiting). Image size: 191606181 bytes.

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-rcxp2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3" in 12.582s (12.582s including waiting). Image size: 190376908 bytes.

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-7ksrz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" in 13.846s (13.846s including waiting). Image size: 191246784 bytes.

openstack-operators

cert-manager-certificates-issuing

infra-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

ovn-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

keystone-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

keystone-operator-controller-manager-7c789f89c6-zq79c

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c" in 13.758s (13.758s including waiting). Image size: 193036438 bytes.

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

placement-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found
(x6)

openstack-operators

cert-manager-certificates-issuing

openstack-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

manila-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-vj8dt

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" in 15.539s (15.539s including waiting). Image size: 191026634 bytes.

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

openstack-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

mariadb-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-k5rcm

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" in 13.468s (13.468s including waiting). Image size: 190936524 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-2plwq

Started

Started container manager

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-2plwq

Created

Created container: manager

openstack-operators

kubelet

octavia-operator-controller-manager-5d86c7ddb7-2plwq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd" in 12.769s (12.769s including waiting). Image size: 193556939 bytes.

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-ndppt

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" in 13.466s (13.466s including waiting). Image size: 193630055 bytes.

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-bbqxt

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6" in 13.965s (13.965s including waiting). Image size: 196200931 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-vj8dt

Started

Started container manager

openstack-operators

multus

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

AddedInterface

Add eth0 [10.128.0.154/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-s8tqw

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7" in 13.965s (13.965s including waiting). Image size: 192121261 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-54688575f-vj8dt

Created

Created container: manager

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-j288g

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505" in 17.405s (17.405s including waiting). Image size: 189416143 bytes.

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-r8xj9

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 16.728s (16.728s including waiting). Image size: 176351298 bytes.

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-7ksrz

Started

Started container manager

openstack-operators

kubelet

test-operator-controller-manager-55b5ff4dbb-9cpc2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" in 12.769s (12.769s including waiting). Image size: 188905402 bytes.

openstack-operators

kubelet

manila-operator-controller-manager-67d996989d-7ksrz

Created

Created container: manager

openstack-operators

cert-manager-certificates-issuing

placement-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

ironic-operator-controller-manager-545456dc4-xth7w

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01" in 17.388s (17.388s including waiting). Image size: 191665088 bytes.

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

Pulling

Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:e97889bd4dd6896d3272e1237f231c79ecc661730a8a757a527ec6c6716908e5"

openstack-operators

multus

infra-operator-controller-manager-65b58d74b-rrd9h

AddedInterface

Add eth0 [10.128.0.146/23] from ovn-kubernetes

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-rcxp2

Started

Started container manager

openstack-operators

kubelet

horizon-operator-controller-manager-78bc7f9bd9-rcxp2

Created

Created container: manager

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-qmcr7

Started

Started container manager

openstack-operators

kubelet

heat-operator-controller-manager-cf99c678f-qmcr7

Created

Created container: manager

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-ccbn4

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c" in 15.175s (15.175s including waiting). Image size: 190114712 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-l7256

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e" in 12.772s (12.772s including waiting). Image size: 190626280 bytes.

openstack-operators

designate-operator-controller-manager-5d87c9d997-jzt22_6a11025e-9464-4e99-b9e9-28559a97aa24

f9497e05.openstack.org

LeaderElection

designate-operator-controller-manager-5d87c9d997-jzt22_6a11025e-9464-4e99-b9e9-28559a97aa24 became leader

openstack-operators

cert-manager-certificates-issuing

ovn-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

designate-operator-controller-manager-5d87c9d997-jzt22

Started

Started container manager

openstack-operators

kubelet

designate-operator-controller-manager-5d87c9d997-jzt22

Created

Created container: manager

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-hjt7h

Started

Started container manager

openstack-operators

kubelet

cinder-operator-controller-manager-55d77d7b5c-hjt7h

Created

Created container: manager

openstack-operators

heat-operator-controller-manager-cf99c678f-qmcr7_250cf7b3-d38b-4115-bf99-9127b2e68633

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-cf99c678f-qmcr7_250cf7b3-d38b-4115-bf99-9127b2e68633 became leader

openstack-operators

cinder-operator-controller-manager-55d77d7b5c-hjt7h_bfadba61-ca04-4094-9a11-94a1bfd0b43e

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-55d77d7b5c-hjt7h_bfadba61-ca04-4094-9a11-94a1bfd0b43e became leader

openstack-operators

manila-operator-controller-manager-67d996989d-7ksrz_0b61a47f-56f9-402f-a668-eebf8d6c3f97

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-67d996989d-7ksrz_0b61a47f-56f9-402f-a668-eebf8d6c3f97 became leader

openstack-operators

horizon-operator-controller-manager-78bc7f9bd9-rcxp2_cfb6c65f-7a71-469b-a357-220d6561d9e5

5ad2eba0.openstack.org

LeaderElection

horizon-operator-controller-manager-78bc7f9bd9-rcxp2_cfb6c65f-7a71-469b-a357-220d6561d9e5 became leader

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-ndppt

Created

Created container: manager

openstack-operators

keystone-operator-controller-manager-7c789f89c6-zq79c_7fd25bb2-9441-4124-bad0-b8c2a79b276e

6012128b.openstack.org

LeaderElection

keystone-operator-controller-manager-7c789f89c6-zq79c_7fd25bb2-9441-4124-bad0-b8c2a79b276e became leader

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-r8xj9

Created

Created container: operator

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-s8tqw

Created

Created container: manager

openstack-operators

kubelet

swift-operator-controller-manager-9b9ff9f4d-s8tqw

Started

Started container manager

openstack-operators

watcher-operator-controller-manager-bccc79885-k5rcm_dcdef7a5-1ae0-487a-912d-f129f525bdb3

5049980f.openstack.org

LeaderElection

watcher-operator-controller-manager-bccc79885-k5rcm_dcdef7a5-1ae0-487a-912d-f129f525bdb3 became leader

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-ccbn4

Created

Created container: manager

openstack-operators

kubelet

ovn-operator-controller-manager-75684d597f-ccbn4

Started

Started container manager

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-l7256

Started

Started container manager

openstack-operators

kubelet

placement-operator-controller-manager-648564c9fc-l7256

Created

Created container: manager

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-r8xj9

Started

Started container operator

openstack-operators

test-operator-controller-manager-55b5ff4dbb-9cpc2_fbf400a4-95a2-4970-be6b-f2dc135205e6

6cce095b.openstack.org

LeaderElection

test-operator-controller-manager-55b5ff4dbb-9cpc2_fbf400a4-95a2-4970-be6b-f2dc135205e6 became leader

openstack-operators

placement-operator-controller-manager-648564c9fc-l7256_aa1795e9-3c4b-4f9c-8c7b-b6af66f63c20

73d6b7ce.openstack.org

LeaderElection

placement-operator-controller-manager-648564c9fc-l7256_aa1795e9-3c4b-4f9c-8c7b-b6af66f63c20 became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-bbqxt

Created

Created container: manager

openstack-operators

kubelet

telemetry-operator-controller-manager-5fdb694969-bbqxt

Started

Started container manager

openstack-operators

mariadb-operator-controller-manager-7b6bfb6475-j288g_6ca04a57-83a4-42ae-b21d-1762a03ab4b5

7c2a6c6b.openstack.org

LeaderElection

mariadb-operator-controller-manager-7b6bfb6475-j288g_6ca04a57-83a4-42ae-b21d-1762a03ab4b5 became leader

openstack-operators

kubelet

test-operator-controller-manager-55b5ff4dbb-9cpc2

Created

Created container: manager

openstack-operators

kubelet

test-operator-controller-manager-55b5ff4dbb-9cpc2

Started

Started container manager

openstack-operators

barbican-operator-controller-manager-6db6876945-nlssq_5ccad5a1-af43-4561-bedb-8eaf32dbd9c6

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-6db6876945-nlssq_5ccad5a1-af43-4561-bedb-8eaf32dbd9c6 became leader

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-k5rcm

Created

Created container: manager

openstack-operators

kubelet

watcher-operator-controller-manager-bccc79885-k5rcm

Started

Started container manager

openstack-operators

ovn-operator-controller-manager-75684d597f-ccbn4_e384eb98-2ad4-4c44-baa1-1670f23678e3

90840a60.openstack.org

LeaderElection

ovn-operator-controller-manager-75684d597f-ccbn4_e384eb98-2ad4-4c44-baa1-1670f23678e3 became leader

openstack-operators

neutron-operator-controller-manager-54688575f-vj8dt_22f4583c-d007-49a9-abc6-585e024a643c

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-54688575f-vj8dt_22f4583c-d007-49a9-abc6-585e024a643c became leader

openstack-operators

octavia-operator-controller-manager-5d86c7ddb7-2plwq_4f3fba49-95c1-48fd-afe4-ac8c29564a98

98809e87.openstack.org

LeaderElection

octavia-operator-controller-manager-5d86c7ddb7-2plwq_4f3fba49-95c1-48fd-afe4-ac8c29564a98 became leader

openstack-operators

kubelet

barbican-operator-controller-manager-6db6876945-nlssq

Created

Created container: manager

openstack-operators

kubelet

barbican-operator-controller-manager-6db6876945-nlssq

Started

Started container manager

openstack-operators

glance-operator-controller-manager-64db6967f8-mq69x_46dc90e3-4564-4a9a-851f-0ad20ec2d5cb

c569355b.openstack.org

LeaderElection

glance-operator-controller-manager-64db6967f8-mq69x_46dc90e3-4564-4a9a-851f-0ad20ec2d5cb became leader

openstack-operators

kubelet

nova-operator-controller-manager-74b6b5dc96-ndppt

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-j288g

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-7b6bfb6475-j288g

Created

Created container: manager

openstack-operators

kubelet

keystone-operator-controller-manager-7c789f89c6-zq79c

Started

Started container manager

openstack-operators

kubelet

keystone-operator-controller-manager-7c789f89c6-zq79c

Created

Created container: manager

openstack-operators

kubelet

ironic-operator-controller-manager-545456dc4-xth7w

Started

Started container manager

openstack-operators

kubelet

ironic-operator-controller-manager-545456dc4-xth7w

Created

Created container: manager

openstack-operators

nova-operator-controller-manager-74b6b5dc96-ndppt_e4303544-9da7-458a-afa3-ed39f32e708a

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-74b6b5dc96-ndppt_e4303544-9da7-458a-afa3-ed39f32e708a became leader

openstack-operators

ironic-operator-controller-manager-545456dc4-xth7w_3e8adca7-901c-4fff-92bb-23f8e98d0ac1

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-545456dc4-xth7w_3e8adca7-901c-4fff-92bb-23f8e98d0ac1 became leader

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-mq69x

Started

Started container manager

openstack-operators

kubelet

glance-operator-controller-manager-64db6967f8-mq69x

Created

Created container: manager

openstack-operators

telemetry-operator-controller-manager-5fdb694969-bbqxt_21f70ff4-c71c-418d-89c2-d6294ccfea67

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-5fdb694969-bbqxt_21f70ff4-c71c-418d-89c2-d6294ccfea67 became leader

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-r8xj9_33839361-fd05-406f-be18-142ee6981a27

rabbitmq-cluster-operator-leader-election

LeaderElection

rabbitmq-cluster-operator-manager-668c99d594-r8xj9_33839361-fd05-406f-be18-142ee6981a27 became leader

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-r8xj9_33839361-fd05-406f-be18-142ee6981a27

rabbitmq-cluster-operator-leader-election

LeaderElection

rabbitmq-cluster-operator-manager-668c99d594-r8xj9_33839361-fd05-406f-be18-142ee6981a27 became leader

openstack-operators

swift-operator-controller-manager-9b9ff9f4d-s8tqw_1dce9922-a9e3-4f7c-adb8-d7e8d545bbf3

83821f12.openstack.org

LeaderElection

swift-operator-controller-manager-9b9ff9f4d-s8tqw_1dce9922-a9e3-4f7c-adb8-d7e8d545bbf3 became leader

openstack-operators

swift-operator-controller-manager-9b9ff9f4d-s8tqw_1dce9922-a9e3-4f7c-adb8-d7e8d545bbf3

83821f12.openstack.org

LeaderElection

swift-operator-controller-manager-9b9ff9f4d-s8tqw_1dce9922-a9e3-4f7c-adb8-d7e8d545bbf3 became leader

openstack-operators

telemetry-operator-controller-manager-5fdb694969-bbqxt_21f70ff4-c71c-418d-89c2-d6294ccfea67

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-5fdb694969-bbqxt_21f70ff4-c71c-418d-89c2-d6294ccfea67 became leader

openstack-operators

cert-manager-certificates-issuing

swift-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 3.99s (3.99s including waiting). Image size: 190527593 bytes.

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 3.99s (3.99s including waiting). Image size: 190527593 bytes.

openstack-operators

cert-manager-certificates-issuing

swift-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:e97889bd4dd6896d3272e1237f231c79ecc661730a8a757a527ec6c6716908e5" in 4.134s (4.134s including waiting). Image size: 192851379 bytes.

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:e97889bd4dd6896d3272e1237f231c79ecc661730a8a757a527ec6c6716908e5" in 4.134s (4.134s including waiting). Image size: 192851379 bytes.

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

Started

Started container manager

openstack-operators

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5_8c7b0d3a-8889-4ca8-b07f-4150828a87b2

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5_8c7b0d3a-8889-4ca8-b07f-4150828a87b2 became leader

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Created

Created container: manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Started

Started container manager

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

Created

Created container: manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5

Created

Created container: manager

openstack-operators

infra-operator-controller-manager-65b58d74b-rrd9h_56ef0656-54b0-4713-ba14-de571c6db46e

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-65b58d74b-rrd9h_56ef0656-54b0-4713-ba14-de571c6db46e became leader

openstack-operators

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5_8c7b0d3a-8889-4ca8-b07f-4150828a87b2

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-dc6dbbbd-xznm5_8c7b0d3a-8889-4ca8-b07f-4150828a87b2 became leader

openstack-operators

infra-operator-controller-manager-65b58d74b-rrd9h_56ef0656-54b0-4713-ba14-de571c6db46e

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-65b58d74b-rrd9h_56ef0656-54b0-4713-ba14-de571c6db46e became leader

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

Created

Created container: manager

openstack-operators

kubelet

infra-operator-controller-manager-65b58d74b-rrd9h

Started

Started container manager

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Created

Created container: manager

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Created

Created container: manager

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Started

Started container manager

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:9e10a98495ce6a05e00b09f74eeae9fdac20e29ca647bc46453a9ae72f4fa498" already present on machine

openstack-operators

multus

openstack-operator-controller-manager-7dfcb4d64f-grrjr

AddedInterface

Add eth0 [10.128.0.161/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Started

Started container manager

openstack-operators

multus

openstack-operator-controller-manager-7dfcb4d64f-grrjr

AddedInterface

Add eth0 [10.128.0.161/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-manager-7dfcb4d64f-grrjr

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:9e10a98495ce6a05e00b09f74eeae9fdac20e29ca647bc46453a9ae72f4fa498" already present on machine

openstack-operators

openstack-operator-controller-manager-7dfcb4d64f-grrjr_89932e5c-13d3-4c10-be5e-367d38783472

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-7dfcb4d64f-grrjr_89932e5c-13d3-4c10-be5e-367d38783472 became leader

openstack-operators

openstack-operator-controller-manager-7dfcb4d64f-grrjr_89932e5c-13d3-4c10-be5e-367d38783472

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-7dfcb4d64f-grrjr_89932e5c-13d3-4c10-be5e-367d38783472 became leader

openstack

cert-manager-certificates-trigger

rootca-public

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack

cert-manager-issuers

rootca-public

ErrInitIssuer

Error initializing issuer: secrets "rootca-public" not found
(x2)

openstack

cert-manager-issuers

rootca-public

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-public" not found

openstack

cert-manager-certificaterequests-issuer-vault

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

rootca-public

Issuing

The certificate has been successfully issued
(x2)

openstack

cert-manager-issuers

rootca-internal

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-internal" not found
(x2)

openstack

cert-manager-issuers

rootca-internal

ErrInitIssuer

Error initializing issuer: secrets "rootca-internal" not found

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

rootca-internal

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rootca-public-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-public-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

rootca-public

Generated

Stored new private key in temporary Secret resource "rootca-public-88gsl"

openstack

cert-manager-certificates-request-manager

rootca-public

Requested

Created new CertificateRequest resource "rootca-public-1"

openstack

cert-manager-certificates-issuing

rootca-internal

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

rootca-internal

Requested

Created new CertificateRequest resource "rootca-internal-1"

openstack

cert-manager-certificaterequests-issuer-acme

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rootca-libvirt-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-libvirt-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-vault

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

rootca-internal

Generated

Stored new private key in temporary Secret resource "rootca-internal-pdbcx"

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-internal-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

rootca-libvirt

Generated

Stored new private key in temporary Secret resource "rootca-libvirt-s4s4g"

openstack

cert-manager-certificaterequests-issuer-ca

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rootca-internal-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

rootca-libvirt

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack

cert-manager-issuers

rootca-libvirt

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-libvirt" not found

openstack

cert-manager-certificates-request-manager

rootca-libvirt

Requested

Created new CertificateRequest resource "rootca-libvirt-1"
(x2)

openstack

cert-manager-issuers

rootca-libvirt

ErrInitIssuer

Error initializing issuer: secrets "rootca-libvirt" not found

openstack

cert-manager-certificates-trigger

rootca-ovn

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack

cert-manager-issuers

rootca-ovn

ErrInitIssuer

Error initializing issuer: secrets "rootca-ovn" not found
(x2)

openstack

cert-manager-issuers

rootca-ovn

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-ovn" not found

openstack

cert-manager-certificates-issuing

rootca-libvirt

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-vault

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rootca-ovn-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-ovn-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

rootca-ovn

Generated

Stored new private key in temporary Secret resource "rootca-ovn-z2pp6"

openstack

cert-manager-certificates-request-manager

rootca-ovn

Requested

Created new CertificateRequest resource "rootca-ovn-1"

openstack

replicaset-controller

dnsmasq-dns-667b9d65dc

SuccessfulCreate

Created pod: dnsmasq-dns-667b9d65dc-vfb6d

openstack

replicaset-controller

dnsmasq-dns-69fd45f56f

SuccessfulCreate

Created pod: dnsmasq-dns-69fd45f56f-msd9g

openstack

cert-manager-certificates-key-manager

rabbitmq-svc

Generated

Stored new private key in temporary Secret resource "rabbitmq-svc-zdm5b"

openstack

cert-manager-certificates-issuing

rootca-ovn

Issuing

The certificate has been successfully issued
(x3)

openstack

cert-manager-issuers

rootca-public

KeyPairVerified

Signing CA verified

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-69fd45f56f to 1
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

metallb-controller

dnsmasq-dns

IPAllocated

Assigned IP ["192.168.122.80"]

openstack

cert-manager-certificates-trigger

rabbitmq-svc

Issuing

Issuing certificate as Secret does not exist

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-667b9d65dc to 1

openstack

cert-manager-certificates-trigger

rabbitmq-cell1-svc

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-69fd45f56f-msd9g

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514"

openstack

multus

dnsmasq-dns-667b9d65dc-vfb6d

AddedInterface

Add eth0 [10.128.0.164/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-approver

rabbitmq-cell1-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-cell1-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-vault

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

rabbitmq-svc

Requested

Created new CertificateRequest resource "rabbitmq-svc-1"

openstack

multus

dnsmasq-dns-69fd45f56f-msd9g

AddedInterface

Add eth0 [10.128.0.163/23] from ovn-kubernetes
(x3)

openstack

cert-manager-issuers

rootca-internal

KeyPairVerified

Signing CA verified

openstack

cert-manager-certificates-key-manager

rabbitmq-cell1-svc

Generated

Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-jk2wh"

openstack

kubelet

dnsmasq-dns-667b9d65dc-vfb6d

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514"

openstack

cert-manager-certificaterequests-issuer-selfsigned

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

rabbitmq-cell1-svc

Requested

Created new CertificateRequest resource "rabbitmq-cell1-svc-1"

openstack

cert-manager-certificaterequests-issuer-venafi

rabbitmq-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rabbitmq-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-issuing

rabbitmq-cell1-svc

Issuing

The certificate has been successfully issued
(x3)

openstack

cert-manager-issuers

rootca-libvirt

KeyPairVerified

Signing CA verified

openstack

statefulset-controller

rabbitmq-server

SuccessfulCreate

create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful

openstack

replicaset-controller

dnsmasq-dns-69fd45f56f

SuccessfulDelete

Deleted pod: dnsmasq-dns-69fd45f56f-msd9g

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-nodes of Type *v1.Service

openstack

cert-manager-certificates-issuing

rabbitmq-svc

Issuing

The certificate has been successfully issued
(x2)

openstack

metallb-controller

rabbitmq

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)

openstack

metallb-controller

rabbitmq

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq of Type *v1.Service

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-erlang-cookie of Type *v1.Secret

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-default-user of Type *v1.Secret

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-plugins-conf of Type *v1.ConfigMap

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-server-conf of Type *v1.ConfigMap

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-server of Type *v1.ServiceAccount

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-peer-discovery of Type *v1.Role

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-server of Type *v1.RoleBinding
(x2)

openstack

persistentvolume-controller

persistence-rabbitmq-server-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

statefulset-controller

rabbitmq-server

SuccessfulCreate

create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-7466868675 to 1 from 0

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

(combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-server of Type *v1.RoleBinding

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-peer-discovery of Type *v1.Role

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-server of Type *v1.ServiceAccount

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-default-user of Type *v1.Secret

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret
(x2)

openstack

metallb-controller

rabbitmq-cell1

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

rabbitmq-cell1

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

metallb-controller

rabbitmq-cell1

IPAllocated

Assigned IP ["172.17.0.86"]

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1 of Type *v1.Service

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulCreate

created resource rabbitmq-cell1-nodes of Type *v1.Service

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

(combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-667b9d65dc to 0 from 1

openstack

persistentvolume-controller

persistence-rabbitmq-server-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

replicaset-controller

dnsmasq-dns-667b9d65dc

SuccessfulDelete

Deleted pod: dnsmasq-dns-667b9d65dc-vfb6d

openstack

metallb-controller

rabbitmq

IPAllocated

Assigned IP ["172.17.0.85"]

openstack

cert-manager-certificates-request-manager

galera-openstack-svc

Requested

Created new CertificateRequest resource "galera-openstack-svc-1"

openstack

cert-manager-certificates-key-manager

galera-openstack-svc

Generated

Stored new private key in temporary Secret resource "galera-openstack-svc-lsgtv"

openstack

cert-manager-certificates-trigger

galera-openstack-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

galera-openstack-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

galera-openstack-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

persistence-rabbitmq-cell1-server-0

Provisioning

External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0"

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-76ff7d945 to 1 from 0

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-69fd45f56f to 0 from 1

openstack

statefulset-controller

rabbitmq-cell1-server

SuccessfulCreate

create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success

openstack

statefulset-controller

rabbitmq-cell1-server

SuccessfulCreate

create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful
(x3)

openstack

cert-manager-issuers

rootca-ovn

KeyPairVerified

Signing CA verified

openstack

replicaset-controller

dnsmasq-dns-76ff7d945

SuccessfulCreate

Created pod: dnsmasq-dns-76ff7d945-qtbgb

openstack

replicaset-controller

dnsmasq-dns-7466868675

SuccessfulCreate

Created pod: dnsmasq-dns-7466868675-m4658

openstack

persistentvolume-controller

persistence-rabbitmq-cell1-server-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

persistentvolume-controller

persistence-rabbitmq-cell1-server-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
(x2)

openstack

persistentvolume-controller

mysql-db-openstack-galera-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

multus

dnsmasq-dns-76ff7d945-qtbgb

AddedInterface

Add eth0 [10.128.0.166/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514"

openstack

persistentvolume-controller

mysql-db-openstack-galera-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514"

openstack

multus

dnsmasq-dns-7466868675-m4658

AddedInterface

Add eth0 [10.128.0.165/23] from ovn-kubernetes

openstack

statefulset-controller

openstack-galera

SuccessfulCreate

create Pod openstack-galera-0 in StatefulSet openstack-galera successful

openstack

statefulset-controller

openstack-galera

SuccessfulCreate

create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success

openstack

cert-manager-certificates-trigger

galera-openstack-cell1-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

galera-openstack-cell1-svc

Generated

Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-rq8hp"

openstack

cert-manager-certificates-issuing

galera-openstack-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

galera-openstack-cell1-svc

Requested

Created new CertificateRequest resource "galera-openstack-cell1-svc-1"

openstack

cert-manager-certificates-issuing

galera-openstack-cell1-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-cell1-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

galera-openstack-cell1-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

galera-openstack-cell1-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

memcached-svc

Requested

Created new CertificateRequest resource "memcached-svc-1"

openstack

cert-manager-certificates-key-manager

memcached-svc

Generated

Stored new private key in temporary Secret resource "memcached-svc-hsn8z"

openstack

cert-manager-certificaterequests-issuer-ca

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

persistentvolume-controller

mysql-db-openstack-cell1-galera-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

cert-manager-certificaterequests-issuer-venafi

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

memcached-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

memcached-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

statefulset-controller

openstack-cell1-galera

SuccessfulCreate

create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful

openstack

cert-manager-certificaterequests-issuer-ca

memcached-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

memcached-svc

Issuing

Issuing certificate as Secret does not exist

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

persistence-rabbitmq-cell1-server-0

ProvisioningSucceeded

Successfully provisioned volume pvc-06462067-2ded-43d7-a02a-43211f51676a

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

persistence-rabbitmq-server-0

Provisioning

External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0"

openstack

statefulset-controller

openstack-cell1-galera

SuccessfulCreate

create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success

openstack

persistentvolume-controller

mysql-db-openstack-cell1-galera-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

cert-manager-certificates-key-manager

ovn-metrics

Generated

Stored new private key in temporary Secret resource "ovn-metrics-vbdm8"

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

persistence-rabbitmq-server-0

ProvisioningSucceeded

Successfully provisioned volume pvc-b3f8d3b9-5cf0-4c92-812a-cc03c36d27f4

openstack

cert-manager-certificates-trigger

ovn-metrics

Issuing

Issuing certificate as Secret does not exist

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

mysql-db-openstack-galera-0

Provisioning

External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0"

openstack

statefulset-controller

memcached

SuccessfulCreate

create Pod memcached-0 in StatefulSet memcached successful

openstack

cert-manager-certificates-issuing

memcached-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

ovn-metrics

Requested

Created new CertificateRequest resource "ovn-metrics-1"

openstack

cert-manager-certificaterequests-issuer-venafi

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

ovn-metrics

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

mysql-db-openstack-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-40399d5b-ef3b-4708-abec-33eea3352bc1

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

mysql-db-openstack-cell1-galera-0

Provisioning

External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0"

openstack

cert-manager-certificaterequests-issuer-acme

ovn-metrics-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ovn-metrics-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-trigger

ovndbcluster-nb-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

ovn-metrics-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

ovnnorthd-ovndbs

Generated

Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-hms2d"

openstack

cert-manager-certificates-trigger

ovncontroller-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

neutron-ovndbs

Generated

Stored new private key in temporary Secret resource "neutron-ovndbs-hww74"

openstack

cert-manager-certificates-trigger

ovnnorthd-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

ovndbcluster-nb-ovndbs

Generated

Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-8hlzs"

openstack

cert-manager-certificates-trigger

neutron-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

mysql-db-openstack-cell1-galera-0

ProvisioningSucceeded

Successfully provisioned volume pvc-d4c85bc6-4ba4-41f8-ac2f-4164794b47c9

openstack

cert-manager-certificaterequests-issuer-acme

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

neutron-ovndbs

Requested

Created new CertificateRequest resource "neutron-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-venafi

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ovncontroller-ovndbs

Generated

Stored new private key in temporary Secret resource "ovncontroller-ovndbs-8q9pn"

openstack

cert-manager-certificates-request-manager

ovncontroller-ovndbs

Requested

Created new CertificateRequest resource "ovncontroller-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ovndbcluster-nb-ovndbs

Requested

Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-acme

ovnnorthd-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ovndbcluster-nb-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

neutron-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovncontroller-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ovnnorthd-ovndbs

Requested

Created new CertificateRequest resource "ovnnorthd-ovndbs-1"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ovndbcluster-nb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ovnnorthd-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ovnnorthd-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ovncontroller-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ovncontroller-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-approver

neutron-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-issuing

ovndbcluster-nb-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

ovncontroller-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-trigger

ovndbcluster-sb-ovndbs

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-issuing

neutron-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

ovnnorthd-ovndbs

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-approver

ovndbcluster-sb-ovndbs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

statefulset-controller

ovsdbserver-nb

SuccessfulCreate

create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful

openstack

statefulset-controller

ovsdbserver-nb

SuccessfulCreate

create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

Provisioning

External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0"

openstack

cert-manager-certificaterequests-issuer-selfsigned

ovndbcluster-sb-ovndbs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

persistentvolume-controller

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

cert-manager-certificaterequests-issuer-ca

ovndbcluster-sb-ovndbs-1

CertificateIssued

Certificate fetched from issuer successfully
(x2)

openstack

persistentvolume-controller

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

cert-manager-certificates-key-manager

ovndbcluster-sb-ovndbs

Generated

Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-998mv"

openstack

cert-manager-certificates-request-manager

ovndbcluster-sb-ovndbs

Requested

Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1"

openstack

persistentvolume-controller

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

daemonset-controller

ovn-controller

SuccessfulCreate

Created pod: ovn-controller-wptpb

openstack

daemonset-controller

ovn-controller-ovs

SuccessfulCreate

Created pod: ovn-controller-ovs-csxfx

openstack

cert-manager-certificates-issuing

ovndbcluster-sb-ovndbs

Issuing

The certificate has been successfully issued

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

Provisioning

External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0"

openstack

statefulset-controller

ovsdbserver-sb

SuccessfulCreate

create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0

ProvisioningSucceeded

Successfully provisioned volume pvc-acc72ec2-5af2-41cb-8898-db335c63aa17

openstack

persistentvolume-controller

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

statefulset-controller

ovsdbserver-sb

SuccessfulCreate

create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0

ProvisioningSucceeded

Successfully provisioned volume pvc-efd6847b-7fd2-49ab-82e8-7e90acb0cc87

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Started

Started container init

openstack

kubelet

dnsmasq-dns-69fd45f56f-msd9g

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" in 21.229s (21.229s including waiting). Image size: 679322452 bytes.

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" in 18.089s (18.089s including waiting). Image size: 679322452 bytes.

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Created

Created container: init

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Started

Started container init

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Created

Created container: init

openstack

kubelet

dnsmasq-dns-667b9d65dc-vfb6d

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" in 21.105s (21.105s including waiting). Image size: 679322452 bytes.

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" in 18.408s (18.408s including waiting). Image size: 679322452 bytes.

openstack

kubelet

dnsmasq-dns-667b9d65dc-vfb6d

Created

Created container: init

openstack

kubelet

dnsmasq-dns-667b9d65dc-vfb6d

Started

Started container init

openstack

kubelet

dnsmasq-dns-69fd45f56f-msd9g

Started

Started container init

openstack

kubelet

dnsmasq-dns-69fd45f56f-msd9g

Created

Created container: init

openstack

multus

ovn-controller-ovs-csxfx

AddedInterface

Add datacentre [] from openstack/datacentre

openstack

kubelet

rabbitmq-server-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447"

openstack

multus

ovn-controller-ovs-csxfx

AddedInterface

Add eth0 [10.128.0.173/23] from ovn-kubernetes

openstack

multus

ovn-controller-ovs-csxfx

AddedInterface

Add ironic [172.20.1.30/24] from openstack/ironic

openstack

multus

openstack-galera-0

AddedInterface

Add eth0 [10.128.0.170/23] from ovn-kubernetes

openstack

kubelet

ovsdbserver-sb-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:812985ffe0b6b538838297b37df5bd7a8be92b372465f939f2588b7f54690e63"

openstack

multus

ovsdbserver-sb-0

AddedInterface

Add internalapi [172.17.0.31/24] from openstack/internalapi

openstack

multus

ovsdbserver-nb-0

AddedInterface

Add eth0 [10.128.0.174/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Started

Started container dnsmasq-dns

openstack

kubelet

openstack-cell1-galera-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa"

openstack

multus

openstack-cell1-galera-0

AddedInterface

Add eth0 [10.128.0.171/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Created

Created container: dnsmasq-dns

openstack

multus

rabbitmq-cell1-server-0

AddedInterface

Add eth0 [10.128.0.167/23] from ovn-kubernetes

openstack

kubelet

rabbitmq-cell1-server-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447"

openstack

multus

ovn-controller-wptpb

AddedInterface

Add eth0 [10.128.0.172/23] from ovn-kubernetes

openstack

multus

ovsdbserver-sb-0

AddedInterface

Add eth0 [10.128.0.175/23] from ovn-kubernetes

openstack

kubelet

ovn-controller-wptpb

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe"

openstack

kubelet

openstack-galera-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa"

openstack

multus

ovsdbserver-nb-0

AddedInterface

Add internalapi [172.17.0.30/24] from openstack/internalapi

openstack

kubelet

ovsdbserver-nb-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:2f232911af0323066a07ba977e4fe2b28c1a3e78fe7032365517c1297dc71d11"

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Started

Started container dnsmasq-dns

openstack

kubelet

memcached-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:f434d78bf81ef3a2087c435011ff995697fc8e53555ba27c2b8d2425e38bda44"

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

multus

memcached-0

AddedInterface

Add eth0 [10.128.0.168/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Created

Created container: dnsmasq-dns

openstack

multus

rabbitmq-server-0

AddedInterface

Add eth0 [10.128.0.169/23] from ovn-kubernetes

openstack

replicaset-controller

dnsmasq-dns-7466868675

SuccessfulDelete

Deleted pod: dnsmasq-dns-7466868675-m4658

openstack

replicaset-controller

dnsmasq-dns-7f654db4c5

SuccessfulCreate

Created pod: dnsmasq-dns-7f654db4c5-5b5lg

openstack

daemonset-controller

ovn-controller-metrics

SuccessfulCreate

Created pod: ovn-controller-metrics-h69l5

openstack

multus

ovn-controller-ovs-csxfx

AddedInterface

Add tenant [172.19.0.30/24] from openstack/tenant

openstack

kubelet

ovn-controller-ovs-csxfx

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:f3885607c0a32da6458909fc04dc0b2919abfcff333975b37f415ef25631a932"

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-7f654db4c5 to 1 from 0

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-7466868675 to 0 from 1

openstack

replicaset-controller

dnsmasq-dns-58dc6c9559

SuccessfulCreate

Created pod: dnsmasq-dns-58dc6c9559-pt84w

openstack

kubelet

ovn-controller-metrics-h69l5

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c"

openstack

multus

ovn-controller-metrics-h69l5

AddedInterface

Add eth0 [10.128.0.176/23] from ovn-kubernetes

openstack

multus

dnsmasq-dns-7f654db4c5-5b5lg

AddedInterface

Add eth0 [10.128.0.177/23] from ovn-kubernetes

openstack

replicaset-controller

dnsmasq-dns-76ff7d945

SuccessfulDelete

Deleted pod: dnsmasq-dns-76ff7d945-qtbgb

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled down replica set dnsmasq-dns-76ff7d945 to 0 from 1

openstack

kubelet

dnsmasq-dns-58dc6c9559-pt84w

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-7f654db4c5-5b5lg

Started

Started container init

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Killing

Stopping container dnsmasq-dns

openstack

multus

dnsmasq-dns-58dc6c9559-pt84w

AddedInterface

Add eth0 [10.128.0.178/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7f654db4c5-5b5lg

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-58dc6c9559-pt84w

Created

Created container: init

openstack

kubelet

dnsmasq-dns-7f654db4c5-5b5lg

Created

Created container: init

openstack

kubelet

dnsmasq-dns-7f654db4c5-5b5lg

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-58dc6c9559-pt84w

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-58dc6c9559-pt84w

Started

Started container init

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Killing

Stopping container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-58dc6c9559-pt84w

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-7f654db4c5-5b5lg

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-58dc6c9559-pt84w

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-7f654db4c5-5b5lg

Started

Started container dnsmasq-dns
(x5)

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulUpdate

updated resource rabbitmq of Type *v1.Service
(x5)

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulUpdate

updated resource rabbitmq-server of Type *v1.StatefulSet

openstack

replicaset-controller

dnsmasq-dns-7f654db4c5

SuccessfulDelete

Deleted pod: dnsmasq-dns-7f654db4c5-5b5lg

openstack

kubelet

dnsmasq-dns-7f654db4c5-5b5lg

Killing

Stopping container dnsmasq-dns

openstack

kubelet

openstack-cell1-galera-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" in 14.529s (14.529s including waiting). Image size: 429822276 bytes.

openstack

kubelet

rabbitmq-server-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" in 14.128s (14.128s including waiting). Image size: 304861257 bytes.

openstack

kubelet

ovn-controller-ovs-csxfx

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:f3885607c0a32da6458909fc04dc0b2919abfcff333975b37f415ef25631a932" in 13.419s (13.419s including waiting). Image size: 324641297 bytes.

openstack

kubelet

ovsdbserver-sb-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:812985ffe0b6b538838297b37df5bd7a8be92b372465f939f2588b7f54690e63" in 13.927s (13.927s including waiting). Image size: 347188517 bytes.

openstack

kubelet

memcached-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:f434d78bf81ef3a2087c435011ff995697fc8e53555ba27c2b8d2425e38bda44" in 14.62s (14.62s including waiting). Image size: 277800650 bytes.

openstack

kubelet

openstack-galera-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" in 14.484s (14.484s including waiting). Image size: 429822276 bytes.

openstack

kubelet

ovn-controller-metrics-h69l5

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 12.45s (12.45s including waiting). Image size: 165206333 bytes.

openstack

kubelet

ovn-controller-ovs-csxfx

Created

Created container: ovsdb-server-init

openstack

kubelet

ovn-controller-ovs-csxfx

Started

Started container ovsdb-server-init

openstack

kubelet

ovn-controller-wptpb

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe" in 14.574s (14.574s including waiting). Image size: 347014089 bytes.

openstack

kubelet

ovsdbserver-nb-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:2f232911af0323066a07ba977e4fe2b28c1a3e78fe7032365517c1297dc71d11" in 14.468s (14.468s including waiting). Image size: 347188005 bytes.

openstack

metallb-controller

dnsmasq-dns-ironic

IPAllocated

Assigned IP ["172.20.1.80"]

openstack

kubelet

ovsdbserver-nb-0

Created

Created container: ovsdbserver-nb

openstack

kubelet

ovsdbserver-nb-0

Started

Started container ovsdbserver-nb

openstack

kubelet

ovsdbserver-nb-0

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine
(x2)

openstack

kubelet

dnsmasq-dns-7466868675-m4658

Unhealthy

Readiness probe failed: dial tcp 10.128.0.165:5353: i/o timeout
(x2)

openstack

metallb-controller

dnsmasq-dns-ironic

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

kubelet

rabbitmq-cell1-server-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" in 14.58s (14.58s including waiting). Image size: 304861257 bytes.
(x2)

openstack

metallb-controller

dnsmasq-dns-ironic

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

dnsmasq-dns-ironic

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

kubelet

ovsdbserver-sb-0

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine

openstack

kubelet

ovsdbserver-sb-0

Started

Started container ovsdbserver-sb

openstack

kubelet

ovsdbserver-sb-0

Created

Created container: ovsdbserver-sb

openstack

kubelet

memcached-0

Started

Started container memcached

openstack

kubelet

ovsdbserver-sb-0

Started

Started container openstack-network-exporter

openstack

kubelet

openstack-galera-0

Started

Started container mysql-bootstrap

openstack

kubelet

openstack-galera-0

Created

Created container: mysql-bootstrap

openstack

kubelet

ovn-controller-metrics-h69l5

Created

Created container: openstack-network-exporter

openstack

kubelet

ovn-controller-metrics-h69l5

Started

Started container openstack-network-exporter

openstack

kubelet

ovsdbserver-sb-0

Created

Created container: openstack-network-exporter
(x2)

openstack

kubelet

dnsmasq-dns-76ff7d945-qtbgb

Unhealthy

Readiness probe failed: dial tcp 10.128.0.166:5353: i/o timeout

openstack

kubelet

rabbitmq-cell1-server-0

Created

Created container: setup-container

openstack

kubelet

rabbitmq-cell1-server-0

Started

Started container setup-container

openstack

kubelet

openstack-cell1-galera-0

Started

Started container mysql-bootstrap

openstack

kubelet

openstack-cell1-galera-0

Created

Created container: mysql-bootstrap

openstack

kubelet

memcached-0

Created

Created container: memcached

openstack

kubelet

ovn-controller-wptpb

Started

Started container ovn-controller

openstack

kubelet

ovn-controller-wptpb

Created

Created container: ovn-controller

openstack

kubelet

ovn-controller-ovs-csxfx

Created

Created container: ovsdb-server

openstack

kubelet

rabbitmq-server-0

Created

Created container: setup-container

openstack

kubelet

ovn-controller-ovs-csxfx

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:f3885607c0a32da6458909fc04dc0b2919abfcff333975b37f415ef25631a932" already present on machine

openstack

kubelet

ovn-controller-ovs-csxfx

Started

Started container ovsdb-server

openstack

kubelet

rabbitmq-server-0

Started

Started container setup-container

openstack

kubelet

ovn-controller-ovs-csxfx

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:f3885607c0a32da6458909fc04dc0b2919abfcff333975b37f415ef25631a932" already present on machine

openstack

kubelet

ovsdbserver-nb-0

Started

Started container openstack-network-exporter

openstack

kubelet

ovsdbserver-nb-0

Created

Created container: openstack-network-exporter

openstack

kubelet

ovn-controller-ovs-csxfx

Started

Started container ovs-vswitchd

openstack

kubelet

ovn-controller-ovs-csxfx

Created

Created container: ovs-vswitchd

openstack

kubelet

openstack-cell1-galera-0

Started

Started container galera

openstack

kubelet

openstack-cell1-galera-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

openstack-galera-0

Started

Started container galera

openstack

statefulset-controller

ovn-northd

SuccessfulCreate

create Pod ovn-northd-0 in StatefulSet ovn-northd successful

openstack

kubelet

openstack-galera-0

Created

Created container: galera

openstack

kubelet

openstack-galera-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

openstack-cell1-galera-0

Created

Created container: galera

openstack

kubelet

ovn-northd-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:48da08c4e3dd579112045aaf6840563e42462346fedbcaaef7258caeed2406f0"

openstack

multus

ovn-northd-0

AddedInterface

Add eth0 [10.128.0.179/23] from ovn-kubernetes

openstack

kubelet

ovn-northd-0

Started

Started container openstack-network-exporter

openstack

kubelet

ovn-northd-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:48da08c4e3dd579112045aaf6840563e42462346fedbcaaef7258caeed2406f0" in 1.096s (1.096s including waiting). Image size: 347185612 bytes.

openstack

kubelet

ovn-northd-0

Created

Created container: ovn-northd

openstack

kubelet

ovn-northd-0

Started

Started container ovn-northd

openstack

kubelet

ovn-northd-0

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine

openstack

kubelet

ovn-northd-0

Created

Created container: openstack-network-exporter
(x5)

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulUpdate

updated resource rabbitmq-cell1-server of Type *v1.StatefulSet
(x5)

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulUpdate

updated resource rabbitmq-cell1 of Type *v1.Service
(x2)

openstack

metallb-controller

swift-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

cert-manager-certificaterequests-issuer-ca

swift-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack

persistentvolume-controller

swift-swift-storage-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

swift-swift-storage-0

Provisioning

External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0"

openstack

persistentvolume-controller

swift-swift-storage-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

statefulset-controller

swift-storage

SuccessfulCreate

create Pod swift-storage-0 in StatefulSet swift-storage successful

openstack

statefulset-controller

swift-storage

SuccessfulCreate

create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success

openstack

cert-manager-certificates-key-manager

swift-internal-svc

Generated

Stored new private key in temporary Secret resource "swift-internal-svc-4vmpz"

openstack

cert-manager-certificates-request-manager

swift-internal-svc

Requested

Created new CertificateRequest resource "swift-internal-svc-1"

openstack

cert-manager-certificates-issuing

swift-internal-svc

Issuing

The certificate has been successfully issued

openstack

metallb-controller

swift-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

swift-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

replicaset-controller

dnsmasq-dns-d6c6c44c5

SuccessfulCreate

Created pod: dnsmasq-dns-d6c6c44c5-7fbfp
(x2)

openstack

metallb-controller

swift-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificates-trigger

swift-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

swift-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

swift-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

swift-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

swift-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

swift-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

swift-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-d6c6c44c5-7fbfp

Started

Started container init

openstack

cert-manager-certificaterequests-approver

swift-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

swift-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

swift-public-svc

Generated

Stored new private key in temporary Secret resource "swift-public-svc-ggjjd"

openstack

cert-manager-certificaterequests-issuer-vault

swift-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

dnsmasq-dns-d6c6c44c5-7fbfp

AddedInterface

Add eth0 [10.128.0.180/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-d6c6c44c5-7fbfp

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-d6c6c44c5-7fbfp

Created

Created container: init

openstack

cert-manager-certificates-request-manager

swift-public-svc

Requested

Created new CertificateRequest resource "swift-public-svc-1"

openstack

cert-manager-certificates-issuing

swift-public-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

dnsmasq-dns-d6c6c44c5-7fbfp

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-d6c6c44c5-7fbfp

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-venafi

swift-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

swift-swift-storage-0

ProvisioningSucceeded

Successfully provisioned volume pvc-c7378389-6847-472c-b514-9e1417dd82a9

openstack

cert-manager-certificates-trigger

swift-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

swift-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

swift-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

swift-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-d6c6c44c5-7fbfp

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

cert-manager-certificaterequests-approver

swift-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

swift-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

swift-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

swift-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

swift-public-route

Requested

Created new CertificateRequest resource "swift-public-route-1"

openstack

cert-manager-certificaterequests-issuer-venafi

swift-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

swift-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

swift-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

swift-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

swift-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

swift-public-route

Generated

Stored new private key in temporary Secret resource "swift-public-route-cpb7b"

openstack

job-controller

root-account-create-update

SuccessfulCreate

Created pod: root-account-create-update-gshbr

openstack

multus

root-account-create-update-gshbr

AddedInterface

Add eth0 [10.128.0.182/23] from ovn-kubernetes

openstack

kubelet

root-account-create-update-gshbr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

job-controller

swift-ring-rebalance

SuccessfulCreate

Created pod: swift-ring-rebalance-gr69d

openstack

ovnk-controlplane

swift-ring-rebalance-gr69d

ErrorUpdatingResource

addLogicalPort failed for openstack/swift-ring-rebalance-gr69d: failed to update pod openstack/swift-ring-rebalance-gr69d: Operation cannot be fulfilled on pods "swift-ring-rebalance-gr69d": the object has been modified; please apply your changes to the latest version and try again

openstack

job-controller

swift-ring-rebalance

SuccessfulCreate

Created pod: swift-ring-rebalance-796n4

openstack

kubelet

root-account-create-update-gshbr

Started

Started container mariadb-account-create-update

openstack

kubelet

root-account-create-update-gshbr

Created

Created container: mariadb-account-create-update

openstack

multus

swift-ring-rebalance-796n4

AddedInterface

Add eth0 [10.128.0.185/23] from ovn-kubernetes

openstack

kubelet

swift-ring-rebalance-796n4

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:4edf7a23d172c4a133480d60844e80fb843a6aaf50a68d8d5fec13ae0c3c03d7"

openstack

job-controller

glance-db-create

SuccessfulCreate

Created pod: glance-db-create-8jd9b

openstack

job-controller

glance-3631-account-create-update

SuccessfulCreate

Created pod: glance-3631-account-create-update-8m8jf

openstack

job-controller

placement-326e-account-create-update

SuccessfulCreate

Created pod: placement-326e-account-create-update-g8dbq

openstack

job-controller

root-account-create-update

Completed

Job completed

openstack

job-controller

keystone-db-create

SuccessfulCreate

Created pod: keystone-db-create-q8tkd

openstack

job-controller

keystone-c490-account-create-update

SuccessfulCreate

Created pod: keystone-c490-account-create-update-rc6gq

openstack

job-controller

placement-db-create

SuccessfulCreate

Created pod: placement-db-create-shqgh

openstack

kubelet

dnsmasq-dns-58dc6c9559-pt84w

Killing

Stopping container dnsmasq-dns

openstack

kubelet

swift-ring-rebalance-796n4

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:4edf7a23d172c4a133480d60844e80fb843a6aaf50a68d8d5fec13ae0c3c03d7" in 4.14s (4.14s including waiting). Image size: 500402961 bytes.
(x5)

openstack

kubelet

swift-storage-0

FailedMount

MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found

openstack

replicaset-controller

dnsmasq-dns-58dc6c9559

SuccessfulDelete

Deleted pod: dnsmasq-dns-58dc6c9559-pt84w

openstack

kubelet

swift-ring-rebalance-796n4

Started

Started container swift-ring-rebalance

openstack

kubelet

swift-ring-rebalance-796n4

Created

Created container: swift-ring-rebalance

openstack

multus

keystone-db-create-q8tkd

AddedInterface

Add eth0 [10.128.0.188/23] from ovn-kubernetes

openstack

kubelet

glance-db-create-8jd9b

Started

Started container mariadb-database-create

openstack

multus

placement-326e-account-create-update-g8dbq

AddedInterface

Add eth0 [10.128.0.191/23] from ovn-kubernetes

openstack

multus

glance-3631-account-create-update-8m8jf

AddedInterface

Add eth0 [10.128.0.187/23] from ovn-kubernetes

openstack

multus

glance-db-create-8jd9b

AddedInterface

Add eth0 [10.128.0.186/23] from ovn-kubernetes

openstack

multus

placement-db-create-shqgh

AddedInterface

Add eth0 [10.128.0.190/23] from ovn-kubernetes

openstack

kubelet

glance-db-create-8jd9b

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

glance-db-create-8jd9b

Created

Created container: mariadb-database-create

openstack

kubelet

keystone-db-create-q8tkd

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

glance-3631-account-create-update-8m8jf

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

placement-db-create-shqgh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

keystone-c490-account-create-update-rc6gq

Started

Started container mariadb-account-create-update

openstack

kubelet

keystone-c490-account-create-update-rc6gq

Created

Created container: mariadb-account-create-update

openstack

kubelet

keystone-c490-account-create-update-rc6gq

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

multus

keystone-c490-account-create-update-rc6gq

AddedInterface

Add eth0 [10.128.0.189/23] from ovn-kubernetes

openstack

kubelet

placement-326e-account-create-update-g8dbq

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

placement-db-create-shqgh

Started

Started container mariadb-database-create

openstack

kubelet

placement-db-create-shqgh

Created

Created container: mariadb-database-create

openstack

kubelet

keystone-db-create-q8tkd

Created

Created container: mariadb-database-create

openstack

kubelet

keystone-db-create-q8tkd

Started

Started container mariadb-database-create

openstack

kubelet

placement-326e-account-create-update-g8dbq

Started

Started container mariadb-account-create-update

openstack

kubelet

glance-3631-account-create-update-8m8jf

Started

Started container mariadb-account-create-update

openstack

kubelet

placement-326e-account-create-update-g8dbq

Created

Created container: mariadb-account-create-update

openstack

kubelet

glance-3631-account-create-update-8m8jf

Created

Created container: mariadb-account-create-update

openstack

job-controller

placement-db-create

Completed

Job completed

openstack

job-controller

glance-db-create

Completed

Job completed

openstack

job-controller

keystone-c490-account-create-update

Completed

Job completed

openstack

job-controller

root-account-create-update

SuccessfulCreate

Created pod: root-account-create-update-m995r

openstack

job-controller

placement-326e-account-create-update

Completed

Job completed

openstack

job-controller

keystone-db-create

Completed

Job completed

openstack

job-controller

glance-3631-account-create-update

Completed

Job completed

openstack

kubelet

root-account-create-update-m995r

Started

Started container mariadb-account-create-update

openstack

kubelet

root-account-create-update-m995r

Created

Created container: mariadb-account-create-update

openstack

multus

root-account-create-update-m995r

AddedInterface

Add eth0 [10.128.0.192/23] from ovn-kubernetes

openstack

kubelet

root-account-create-update-m995r

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

job-controller

glance-db-sync

SuccessfulCreate

Created pod: glance-db-sync-s9668

openstack

multus

glance-db-sync-s9668

AddedInterface

Add eth0 [10.128.0.193/23] from ovn-kubernetes

openstack

multus

glance-db-sync-s9668

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

kubelet

glance-db-sync-s9668

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07"

openstack

multus

swift-storage-0

AddedInterface

Add eth0 [10.128.0.181/23] from ovn-kubernetes

openstack

kubelet

swift-storage-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645"

openstack

kubelet

swift-storage-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645" in 1.015s (1.015s including waiting). Image size: 445346822 bytes.

openstack

kubelet

swift-storage-0

Created

Created container: account-auditor

openstack

kubelet

swift-storage-0

Started

Started container account-server

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645" already present on machine

openstack

job-controller

root-account-create-update

Completed

Job completed

openstack

kubelet

swift-storage-0

Started

Started container account-auditor

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645" already present on machine

openstack

kubelet

swift-storage-0

Created

Created container: account-replicator

openstack

kubelet

swift-storage-0

Started

Started container account-replicator

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645" already present on machine

openstack

kubelet

swift-storage-0

Created

Created container: account-server

openstack

kubelet

swift-storage-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:1af3cfa900bdb14059b01962275ab139988ff3d9056421fc6cea49f366b60c49"

openstack

kubelet

swift-storage-0

Started

Started container account-reaper

openstack

kubelet

swift-storage-0

Created

Created container: account-reaper

openstack

job-controller

swift-ring-rebalance

Completed

Job completed

openstack

kubelet

swift-storage-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:1af3cfa900bdb14059b01962275ab139988ff3d9056421fc6cea49f366b60c49" in 1.31s (1.31s including waiting). Image size: 445362696 bytes.

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:1af3cfa900bdb14059b01962275ab139988ff3d9056421fc6cea49f366b60c49" already present on machine

openstack

kubelet

swift-storage-0

Started

Started container container-server

openstack

kubelet

swift-storage-0

Created

Created container: container-server

openstack

kubelet

rabbitmq-cell1-server-0

Started

Started container rabbitmq

openstack

kubelet

rabbitmq-server-0

Started

Started container rabbitmq

openstack

kubelet

swift-storage-0

Created

Created container: container-replicator

openstack

kubelet

rabbitmq-cell1-server-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" already present on machine

openstack

kubelet

rabbitmq-cell1-server-0

Created

Created container: rabbitmq

openstack

kubelet

rabbitmq-server-0

Created

Created container: rabbitmq

openstack

kubelet

swift-storage-0

Started

Started container container-replicator

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openstack

kubelet

rabbitmq-server-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" already present on machine

openstack

job-controller

ovn-controller-wptpb-config

SuccessfulCreate

Created pod: ovn-controller-wptpb-config-x2v29
(x3)

openstack

kubelet

ovn-controller-wptpb

Unhealthy

Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status

openstack

rabbitmq-server-0/rabbitmq_peer_discovery

pod/rabbitmq-server-0

Created

Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered

openstack

rabbitmq-cell1-server-0/rabbitmq_peer_discovery

pod/rabbitmq-cell1-server-0

Created

Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered

openstack

kubelet

glance-db-sync-s9668

Created

Created container: glance-db-sync

openstack

kubelet

glance-db-sync-s9668

Started

Started container glance-db-sync

openstack

multus

ovn-controller-wptpb-config-x2v29

AddedInterface

Add eth0 [10.128.0.194/23] from ovn-kubernetes

openstack

kubelet

ovn-controller-wptpb-config-x2v29

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe" already present on machine

openstack

kubelet

ovn-controller-wptpb-config-x2v29

Created

Created container: ovn-config

openstack

kubelet

ovn-controller-wptpb-config-x2v29

Started

Started container ovn-config

openstack

kubelet

glance-db-sync-s9668

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" in 14.485s (14.485s including waiting). Image size: 983190896 bytes.

openstack

replicaset-controller

dnsmasq-dns-6465c5fc85

SuccessfulCreate

Created pod: dnsmasq-dns-6465c5fc85-2kk4v

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

multus

dnsmasq-dns-6465c5fc85-2kk4v

AddedInterface

Add eth0 [10.128.0.195/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Created

Created container: init

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Started

Started container init

openstack

job-controller

ovn-controller-wptpb-config

Completed

Job completed

openstack

metallb-speaker

rabbitmq-cell1

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

job-controller

cinder-db-create

SuccessfulCreate

Created pod: cinder-db-create-5hn4x

openstack

metallb-speaker

rabbitmq

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

job-controller

cinder-66da-account-create-update

SuccessfulCreate

Created pod: cinder-66da-account-create-update-cczxb

openstack

job-controller

neutron-b4a7-account-create-update

SuccessfulCreate

Created pod: neutron-b4a7-account-create-update-xgrjd

openstack

job-controller

keystone-db-sync

SuccessfulCreate

Created pod: keystone-db-sync-cmr5t

openstack

job-controller

neutron-db-create

SuccessfulCreate

Created pod: neutron-db-create-dkp4f

openstack

multus

cinder-db-create-5hn4x

AddedInterface

Add eth0 [10.128.0.196/23] from ovn-kubernetes

openstack

kubelet

cinder-db-create-5hn4x

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

cinder-66da-account-create-update-cczxb

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

neutron-b4a7-account-create-update-xgrjd

Started

Started container mariadb-account-create-update

openstack

kubelet

neutron-db-create-dkp4f

Created

Created container: mariadb-database-create

openstack

multus

keystone-db-sync-cmr5t

AddedInterface

Add eth0 [10.128.0.199/23] from ovn-kubernetes

openstack

kubelet

neutron-db-create-dkp4f

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

multus

neutron-b4a7-account-create-update-xgrjd

AddedInterface

Add eth0 [10.128.0.200/23] from ovn-kubernetes

openstack

kubelet

neutron-b4a7-account-create-update-xgrjd

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

keystone-db-sync-cmr5t

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:0eeb2759adca98fed8913fe00b0a87d706bde89efff3b5ef6d962bc3ca5204b0"

openstack

multus

cinder-66da-account-create-update-cczxb

AddedInterface

Add eth0 [10.128.0.197/23] from ovn-kubernetes

openstack

kubelet

cinder-db-create-5hn4x

Created

Created container: mariadb-database-create

openstack

kubelet

neutron-db-create-dkp4f

Started

Started container mariadb-database-create

openstack

kubelet

cinder-66da-account-create-update-cczxb

Started

Started container mariadb-account-create-update

openstack

kubelet

neutron-b4a7-account-create-update-xgrjd

Created

Created container: mariadb-account-create-update

openstack

multus

neutron-db-create-dkp4f

AddedInterface

Add eth0 [10.128.0.198/23] from ovn-kubernetes

openstack

kubelet

cinder-db-create-5hn4x

Started

Started container mariadb-database-create

openstack

kubelet

cinder-66da-account-create-update-cczxb

Created

Created container: mariadb-account-create-update

openstack

kubelet

dnsmasq-dns-d6c6c44c5-7fbfp

Killing

Stopping container dnsmasq-dns

openstack

replicaset-controller

dnsmasq-dns-d6c6c44c5

SuccessfulDelete

Deleted pod: dnsmasq-dns-d6c6c44c5-7fbfp

openstack

kubelet

keystone-db-sync-cmr5t

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:0eeb2759adca98fed8913fe00b0a87d706bde89efff3b5ef6d962bc3ca5204b0" in 5.212s (5.212s including waiting). Image size: 520351243 bytes.

openstack

job-controller

cinder-db-create

Completed

Job completed

openstack

kubelet

keystone-db-sync-cmr5t

Started

Started container keystone-db-sync

openstack

kubelet

keystone-db-sync-cmr5t

Created

Created container: keystone-db-sync
(x2)

openstack

metallb-controller

glance-default-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

glance-default-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

glance-default-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

job-controller

cinder-66da-account-create-update

Completed

Job completed

openstack

metallb-controller

glance-default-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

cert-manager-certificates-request-manager

glance-default-internal-svc

Requested

Created new CertificateRequest resource "glance-default-internal-svc-1"

openstack

job-controller

glance-db-sync

Completed

Job completed

openstack

cert-manager-certificates-key-manager

glance-default-internal-svc

Generated

Stored new private key in temporary Secret resource "glance-default-internal-svc-z495t"

openstack

job-controller

neutron-b4a7-account-create-update

Completed

Job completed

openstack

cert-manager-certificates-trigger

glance-default-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-venafi

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

dnsmasq-dns-5f5db5bd5

SuccessfulCreate

Created pod: dnsmasq-dns-5f5db5bd5-2tvbr

openstack

cert-manager-certificaterequests-issuer-vault

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

glance-default-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

job-controller

neutron-db-create

Completed

Job completed

openstack

kubelet

dnsmasq-dns-5f5db5bd5-2tvbr

Created

Created container: init

openstack

cert-manager-certificaterequests-issuer-vault

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

glance-default-public-svc

Requested

Created new CertificateRequest resource "glance-default-public-svc-1"

openstack

cert-manager-certificates-key-manager

glance-default-public-svc

Generated

Stored new private key in temporary Secret resource "glance-default-public-svc-xk6t8"

openstack

kubelet

dnsmasq-dns-5f5db5bd5-2tvbr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

cert-manager-certificates-trigger

glance-default-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-5f5db5bd5-2tvbr

Started

Started container init

openstack

kubelet

dnsmasq-dns-5f5db5bd5-2tvbr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

multus

dnsmasq-dns-5f5db5bd5-2tvbr

AddedInterface

Add eth0 [10.128.0.201/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-approver

glance-default-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-issuing

glance-default-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

glance-default-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

glance-default-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

glance-default-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

glance-default-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-5f5db5bd5-2tvbr

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-5f5db5bd5-2tvbr

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-acme

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

glance-default-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-issuing

glance-default-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

glance-default-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

glance-default-public-route

Requested

Created new CertificateRequest resource "glance-default-public-route-1"

openstack

cert-manager-certificates-key-manager

glance-default-public-route

Generated

Stored new private key in temporary Secret resource "glance-default-public-route-kqvtz"

openstack

job-controller

keystone-db-sync

Completed

Job completed
(x2)

openstack

metallb-controller

keystone-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)

openstack

persistentvolume-controller

glance-glance-213eb-default-internal-api-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

replicaset-controller

dnsmasq-dns-7bbc6577f5

SuccessfulCreate

Created pod: dnsmasq-dns-7bbc6577f5-mldsh

openstack

metallb-controller

placement-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

job-controller

neutron-db-sync

SuccessfulCreate

Created pod: neutron-db-sync-97jz8
(x2)

openstack

metallb-controller

placement-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificates-trigger

keystone-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

replicaset-controller

dnsmasq-dns-5787b6ddf7

SuccessfulCreate

Created pod: dnsmasq-dns-5787b6ddf7-gjnck
(x2)

openstack

metallb-controller

placement-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

job-controller

ironic-db-create

SuccessfulCreate

Created pod: ironic-db-create-j9hg2

openstack

replicaset-controller

dnsmasq-dns-5787b6ddf7

SuccessfulDelete

Deleted pod: dnsmasq-dns-5787b6ddf7-gjnck

openstack

job-controller

cinder-86971-db-sync

SuccessfulCreate

Created pod: cinder-86971-db-sync-m7xht

openstack

metallb-controller

keystone-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

placement-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

job-controller

keystone-bootstrap

SuccessfulCreate

Created pod: keystone-bootstrap-m6pmp

openstack

statefulset-controller

glance-213eb-default-internal-api

SuccessfulCreate

create Claim glance-glance-213eb-default-internal-api-0 Pod glance-213eb-default-internal-api-0 in StatefulSet glance-213eb-default-internal-api success

openstack

job-controller

ironic-20ba-account-create-update

SuccessfulCreate

Created pod: ironic-20ba-account-create-update-4dtlr

openstack

replicaset-controller

dnsmasq-dns-5f5db5bd5

SuccessfulDelete

Deleted pod: dnsmasq-dns-5f5db5bd5-2tvbr
(x2)

openstack

metallb-controller

keystone-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

cert-manager-certificates-key-manager

keystone-internal-svc

Generated

Stored new private key in temporary Secret resource "keystone-internal-svc-jxc8w"
(x2)

openstack

metallb-controller

keystone-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

kubelet

dnsmasq-dns-5f5db5bd5-2tvbr

Killing

Stopping container dnsmasq-dns

openstack

statefulset-controller

glance-213eb-default-external-api

SuccessfulCreate

create Claim glance-glance-213eb-default-external-api-0 Pod glance-213eb-default-external-api-0 in StatefulSet glance-213eb-default-external-api success

openstack

job-controller

placement-db-sync

SuccessfulCreate

Created pod: placement-db-sync-4wkkv

openstack

persistentvolume-controller

glance-glance-213eb-default-internal-api-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

persistentvolume-controller

glance-glance-213eb-default-external-api-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

persistentvolume-controller

glance-glance-213eb-default-external-api-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

glance-glance-213eb-default-external-api-0

Provisioning

External provisioner is provisioning volume for claim "openstack/glance-glance-213eb-default-external-api-0"

openstack

cert-manager-certificaterequests-issuer-acme

keystone-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-5787b6ddf7-gjnck

Created

Created container: init

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

glance-glance-213eb-default-external-api-0

ProvisioningSucceeded

Successfully provisioned volume pvc-4b3e46a7-10f2-435d-87b4-b6dd0a8c16d3

openstack

kubelet

dnsmasq-dns-5787b6ddf7-gjnck

Started

Started container init

openstack

cert-manager-certificates-trigger

keystone-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-venafi

keystone-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

keystone-bootstrap-m6pmp

AddedInterface

Add eth0 [10.128.0.202/23] from ovn-kubernetes

openstack

kubelet

keystone-bootstrap-m6pmp

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:0eeb2759adca98fed8913fe00b0a87d706bde89efff3b5ef6d962bc3ca5204b0" already present on machine

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

glance-glance-213eb-default-internal-api-0

Provisioning

External provisioner is provisioning volume for claim "openstack/glance-glance-213eb-default-internal-api-0"

openstack

cert-manager-certificaterequests-issuer-vault

keystone-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

ironic-db-create-j9hg2

AddedInterface

Add eth0 [10.128.0.204/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-selfsigned

keystone-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

keystone-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-approver

keystone-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-request-manager

keystone-internal-svc

Requested

Created new CertificateRequest resource "keystone-internal-svc-1"

openstack

multus

dnsmasq-dns-5787b6ddf7-gjnck

AddedInterface

Add eth0 [10.128.0.203/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

keystone-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

dnsmasq-dns-5787b6ddf7-gjnck

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

cert-manager-certificaterequests-issuer-ca

keystone-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

keystone-public-svc

Requested

Created new CertificateRequest resource "keystone-public-svc-1"

openstack

kubelet

neutron-db-sync-97jz8

Created

Created container: neutron-db-sync

openstack

kubelet

cinder-86971-db-sync-m7xht

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838"

openstack

multus

cinder-86971-db-sync-m7xht

AddedInterface

Add eth0 [10.128.0.205/23] from ovn-kubernetes

openstack

cert-manager-certificates-trigger

keystone-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

keystone-bootstrap-m6pmp

Started

Started container keystone-bootstrap

openstack

kubelet

placement-db-sync-4wkkv

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:11d4431e4af1735fbd9d425596f81dd62b0ca934d84d7c4e67902656c2b688d3"

openstack

multus

placement-db-sync-4wkkv

AddedInterface

Add eth0 [10.128.0.208/23] from ovn-kubernetes

openstack

kubelet

keystone-bootstrap-m6pmp

Created

Created container: keystone-bootstrap

openstack

multus

dnsmasq-dns-7bbc6577f5-mldsh

AddedInterface

Add eth0 [10.128.0.209/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

cert-manager-certificates-issuing

keystone-public-svc

Issuing

The certificate has been successfully issued

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

glance-glance-213eb-default-internal-api-0

ProvisioningSucceeded

Successfully provisioned volume pvc-2828e4cd-2480-4309-bb23-a8e5342365ce

openstack

multus

ironic-20ba-account-create-update-4dtlr

AddedInterface

Add eth0 [10.128.0.207/23] from ovn-kubernetes

openstack

cert-manager-certificates-key-manager

keystone-public-svc

Generated

Stored new private key in temporary Secret resource "keystone-public-svc-hm9tf"

openstack

kubelet

neutron-db-sync-97jz8

Started

Started container neutron-db-sync

openstack

cert-manager-certificaterequests-approver

keystone-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

ironic-db-create-j9hg2

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

ironic-db-create-j9hg2

Created

Created container: mariadb-database-create

openstack

kubelet

ironic-db-create-j9hg2

Started

Started container mariadb-database-create

openstack

cert-manager-certificaterequests-issuer-ca

keystone-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-selfsigned

keystone-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

neutron-db-sync-97jz8

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

multus

neutron-db-sync-97jz8

AddedInterface

Add eth0 [10.128.0.206/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-venafi

keystone-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

keystone-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

keystone-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

keystone-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

keystone-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

keystone-public-route

Generated

Stored new private key in temporary Secret resource "keystone-public-route-nrzr4"

openstack

cert-manager-certificaterequests-issuer-vault

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

keystone-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-approver

keystone-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Started

Started container init

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Created

Created container: init

openstack

cert-manager-certificaterequests-issuer-ca

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-20ba-account-create-update-4dtlr

Started

Started container mariadb-account-create-update

openstack

kubelet

ironic-20ba-account-create-update-4dtlr

Created

Created container: mariadb-account-create-update

openstack

kubelet

ironic-20ba-account-create-update-4dtlr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

cert-manager-certificates-request-manager

keystone-public-route

Requested

Created new CertificateRequest resource "keystone-public-route-1"

openstack

cert-manager-certificates-trigger

placement-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-venafi

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

keystone-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

placement-internal-svc

Generated

Stored new private key in temporary Secret resource "placement-internal-svc-q2cfd"

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Created

Created container: dnsmasq-dns

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Started

Started container dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-ca

placement-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

placement-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-acme

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

placement-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-issuing

placement-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

placement-internal-svc

Requested

Created new CertificateRequest resource "placement-internal-svc-1"

openstack

cert-manager-certificaterequests-issuer-acme

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

placement-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

placement-public-route

Requested

Created new CertificateRequest resource "placement-public-route-1"

openstack

cert-manager-certificates-key-manager

placement-public-route

Generated

Stored new private key in temporary Secret resource "placement-public-route-8g2cw"

openstack

cert-manager-certificates-trigger

placement-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

placement-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

placement-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-key-manager

placement-public-svc

Generated

Stored new private key in temporary Secret resource "placement-public-svc-rtn87"

openstack

cert-manager-certificates-request-manager

placement-public-svc

Requested

Created new CertificateRequest resource "placement-public-svc-1"

openstack

cert-manager-certificates-issuing

placement-public-svc

Issuing

The certificate has been successfully issued

openstack

job-controller

ironic-db-create

Completed

Job completed

openstack

multus

glance-213eb-default-internal-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

kubelet

placement-db-sync-4wkkv

Created

Created container: placement-db-sync

openstack

multus

glance-213eb-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.212/23] from ovn-kubernetes

openstack

kubelet

placement-db-sync-4wkkv

Started

Started container placement-db-sync

openstack

kubelet

glance-213eb-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine

openstack

kubelet

placement-db-sync-4wkkv

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:11d4431e4af1735fbd9d425596f81dd62b0ca934d84d7c4e67902656c2b688d3" in 5.112s (5.112s including waiting). Image size: 472931542 bytes.

openstack

kubelet

glance-213eb-default-internal-api-0

Started

Started container glance-httpd

openstack

multus

glance-213eb-default-external-api-0

AddedInterface

Add eth0 [10.128.0.213/23] from ovn-kubernetes

openstack

kubelet

glance-213eb-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine

openstack

kubelet

glance-213eb-default-external-api-0

Created

Created container: glance-log

openstack

kubelet

glance-213eb-default-internal-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-213eb-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine

openstack

multus

glance-213eb-default-external-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

job-controller

ironic-20ba-account-create-update

Completed

Job completed

openstack

kubelet

glance-213eb-default-external-api-0

Started

Started container glance-log

openstack

kubelet

glance-213eb-default-internal-api-0

Created

Created container: glance-log

openstack

kubelet

glance-213eb-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine

openstack

kubelet

glance-213eb-default-internal-api-0

Started

Started container glance-log

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

kubelet

glance-213eb-default-external-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-213eb-default-external-api-0

Started

Started container glance-httpd

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Killing

Stopping container dnsmasq-dns

openstack

job-controller

ironic-db-sync

SuccessfulCreate

Created pod: ironic-db-sync-mtvqh

openstack

replicaset-controller

dnsmasq-dns-6465c5fc85

SuccessfulDelete

Deleted pod: dnsmasq-dns-6465c5fc85-2kk4v

openstack

job-controller

keystone-bootstrap

SuccessfulCreate

Created pod: keystone-bootstrap-zh2n5

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Unhealthy

Readiness probe failed: dial tcp 10.128.0.195:5353: connect: connection refused

openstack

kubelet

cinder-86971-db-sync-m7xht

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" in 19.513s (19.513s including waiting). Image size: 1161387303 bytes.

openstack

kubelet

keystone-bootstrap-zh2n5

Started

Started container keystone-bootstrap

openstack

kubelet

cinder-86971-db-sync-m7xht

Created

Created container: cinder-86971-db-sync

openstack

kubelet

ironic-db-sync-mtvqh

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:7dc1464a072b28d5bc1e10127f898f61e85cffb63a67a51a15fd01322da295fa"

openstack

kubelet

cinder-86971-db-sync-m7xht

Started

Started container cinder-86971-db-sync

openstack

kubelet

keystone-bootstrap-zh2n5

Created

Created container: keystone-bootstrap

openstack

kubelet

keystone-bootstrap-zh2n5

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:0eeb2759adca98fed8913fe00b0a87d706bde89efff3b5ef6d962bc3ca5204b0" already present on machine

openstack

multus

keystone-bootstrap-zh2n5

AddedInterface

Add eth0 [10.128.0.214/23] from ovn-kubernetes

openstack

multus

ironic-db-sync-mtvqh

AddedInterface

Add eth0 [10.128.0.215/23] from ovn-kubernetes

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled up replica set placement-6cc7544794 to 1

openstack

replicaset-controller

placement-6cc7544794

SuccessfulCreate

Created pod: placement-6cc7544794-vmcq4

openstack

job-controller

placement-db-sync

Completed

Job completed

openstack

multus

placement-6cc7544794-vmcq4

AddedInterface

Add eth0 [10.128.0.216/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-6465c5fc85-2kk4v

Unhealthy

Readiness probe failed: dial tcp 10.128.0.195:5353: i/o timeout

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

Unhealthy

Readiness probe failed: Get "https://10.128.0.25:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

placement-6cc7544794-vmcq4

Created

Created container: placement-log

openstack

kubelet

ironic-db-sync-mtvqh

Started

Started container init

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7d9c49f57b-j454x

ProbeError

Readiness probe error: Get "https://10.128.0.25:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openstack

kubelet

ironic-db-sync-mtvqh

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:7dc1464a072b28d5bc1e10127f898f61e85cffb63a67a51a15fd01322da295fa" in 8.244s (8.244s including waiting). Image size: 599253577 bytes.

openstack

kubelet

placement-6cc7544794-vmcq4

Started

Started container placement-api

openstack

kubelet

placement-6cc7544794-vmcq4

Created

Created container: placement-api

openstack

kubelet

ironic-db-sync-mtvqh

Created

Created container: init

openstack

kubelet

placement-6cc7544794-vmcq4

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:11d4431e4af1735fbd9d425596f81dd62b0ca934d84d7c4e67902656c2b688d3" already present on machine

openstack

kubelet

placement-6cc7544794-vmcq4

Started

Started container placement-log

openstack

kubelet

placement-6cc7544794-vmcq4

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:11d4431e4af1735fbd9d425596f81dd62b0ca934d84d7c4e67902656c2b688d3" already present on machine

openstack

kubelet

ironic-db-sync-mtvqh

Created

Created container: ironic-db-sync

openstack

kubelet

ironic-db-sync-mtvqh

Started

Started container ironic-db-sync

openstack

kubelet

ironic-db-sync-mtvqh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:7dc1464a072b28d5bc1e10127f898f61e85cffb63a67a51a15fd01322da295fa" already present on machine

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

replicaset-controller

keystone-798d5f97fb

SuccessfulCreate

Created pod: keystone-798d5f97fb-2sbnv

openstack

deployment-controller

keystone

ScalingReplicaSet

Scaled up replica set keystone-798d5f97fb to 1

openstack

kubelet

keystone-798d5f97fb-2sbnv

Created

Created container: keystone-api

openstack

kubelet

keystone-798d5f97fb-2sbnv

Started

Started container keystone-api

openstack

kubelet

keystone-798d5f97fb-2sbnv

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:0eeb2759adca98fed8913fe00b0a87d706bde89efff3b5ef6d962bc3ca5204b0" already present on machine

openstack

multus

keystone-798d5f97fb-2sbnv

AddedInterface

Add eth0 [10.128.0.217/23] from ovn-kubernetes

openstack

replicaset-controller

dnsmasq-dns-589dd8c5c

SuccessfulCreate

Created pod: dnsmasq-dns-589dd8c5c-bm6b7

openstack

job-controller

cinder-86971-db-sync

Completed

Job completed
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

metallb-controller

cinder-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

cert-manager-certificates-key-manager

cinder-internal-svc

Generated

Stored new private key in temporary Secret resource "cinder-internal-svc-4s5sk"

openstack

multus

cinder-86971-scheduler-0

AddedInterface

Add eth0 [10.128.0.218/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-approver

cinder-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

cinder-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-internal-svc

Requested

Created new CertificateRequest resource "cinder-internal-svc-1"

openstack

cert-manager-certificates-trigger

cinder-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-trigger

cinder-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

cinder-86971-backup-0

AddedInterface

Add eth0 [10.128.0.221/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-589dd8c5c-bm6b7

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-589dd8c5c-bm6b7

Started

Started container init

openstack

kubelet

dnsmasq-dns-589dd8c5c-bm6b7

Created

Created container: init

openstack

kubelet

dnsmasq-dns-589dd8c5c-bm6b7

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

multus

dnsmasq-dns-589dd8c5c-bm6b7

AddedInterface

Add eth0 [10.128.0.219/23] from ovn-kubernetes

openstack

cert-manager-certificates-issuing

cinder-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-public-svc

Requested

Created new CertificateRequest resource "cinder-public-svc-1"

openstack

cert-manager-certificates-key-manager

cinder-public-svc

Generated

Stored new private key in temporary Secret resource "cinder-public-svc-f7hhk"

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

cinder-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

cinder-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:46e8a618b5c7be7e112d51c61b7a55c4ece540d5a7adc2b1718364a79d3fe60c"

openstack

multus

cinder-86971-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.220/23] from ovn-kubernetes

openstack

kubelet

cinder-86971-scheduler-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9b872cb203a96eecc6a17a2d4bb8ff46d1466c7be3000cddd68be89f74016777" in 874ms (874ms including waiting). Image size: 1083250334 bytes.

openstack

kubelet

cinder-86971-scheduler-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9b872cb203a96eecc6a17a2d4bb8ff46d1466c7be3000cddd68be89f74016777"

openstack

kubelet

cinder-86971-backup-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b6653696345c9de7573839c805c94401dd032a8d30027218576406bf4654806e"

openstack

multus

cinder-86971-api-0

AddedInterface

Add eth0 [10.128.0.222/23] from ovn-kubernetes

openstack

multus

cinder-86971-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

kubelet

cinder-86971-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" already present on machine

openstack

cert-manager-certificates-key-manager

cinder-public-route

Generated

Stored new private key in temporary Secret resource "cinder-public-route-jvwhp"

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

job-controller

neutron-db-sync

Completed

Job completed

openstack

statefulset-controller

cinder-86971-api

SuccessfulDelete

delete Pod cinder-86971-api-0 in StatefulSet cinder-86971-api successful

openstack

replicaset-controller

dnsmasq-dns-589dd8c5c

SuccessfulDelete

Deleted pod: dnsmasq-dns-589dd8c5c-bm6b7
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

kubelet

dnsmasq-dns-589dd8c5c-bm6b7

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-589dd8c5c-bm6b7

Created

Created container: dnsmasq-dns

openstack

metallb-controller

neutron-internal

IPAllocated

Assigned IP ["172.17.0.80"]

openstack

cert-manager-certificates-issuing

cinder-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-public-route

Requested

Created new CertificateRequest resource "cinder-public-route-1"

openstack

kubelet

cinder-86971-scheduler-0

Created

Created container: cinder-scheduler

openstack

kubelet

cinder-86971-backup-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b6653696345c9de7573839c805c94401dd032a8d30027218576406bf4654806e" in 934ms (934ms including waiting). Image size: 1083255579 bytes.

openstack

kubelet

cinder-86971-api-0

Created

Created container: cinder-86971-api-log

openstack

kubelet

cinder-86971-api-0

Started

Started container cinder-86971-api-log

openstack

kubelet

cinder-86971-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" already present on machine

openstack

replicaset-controller

dnsmasq-dns-7fb78888f7

SuccessfulCreate

Created pod: dnsmasq-dns-7fb78888f7-pwtc8

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:46e8a618b5c7be7e112d51c61b7a55c4ece540d5a7adc2b1718364a79d3fe60c" already present on machine

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Started

Started container cinder-volume

openstack

cert-manager-certificaterequests-approver

cinder-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Created

Created container: cinder-volume

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:46e8a618b5c7be7e112d51c61b7a55c4ece540d5a7adc2b1718364a79d3fe60c" in 1.009s (1.009s including waiting). Image size: 1084192222 bytes.

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-86971-scheduler-0

Started

Started container cinder-scheduler

openstack

replicaset-controller

neutron-f49f69884

SuccessfulCreate

Created pod: neutron-f49f69884-v8xz2

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-f49f69884 to 1

openstack

kubelet

cinder-86971-backup-0

Started

Started container probe

openstack

kubelet

cinder-86971-backup-0

Created

Created container: probe

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Created

Created container: probe

openstack

kubelet

cinder-86971-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b6653696345c9de7573839c805c94401dd032a8d30027218576406bf4654806e" already present on machine

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Started

Started container probe

openstack

kubelet

cinder-86971-api-0

Created

Created container: cinder-api

openstack

kubelet

cinder-86971-backup-0

Started

Started container cinder-backup

openstack

kubelet

cinder-86971-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9b872cb203a96eecc6a17a2d4bb8ff46d1466c7be3000cddd68be89f74016777" already present on machine

openstack

kubelet

cinder-86971-backup-0

Created

Created container: cinder-backup

openstack

kubelet

dnsmasq-dns-7fb78888f7-pwtc8

Created

Created container: init

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-7fb78888f7-pwtc8

Started

Started container init

openstack

kubelet

neutron-f49f69884-v8xz2

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

cinder-86971-api-0

Killing

Stopping container cinder-86971-api-log

openstack

multus

neutron-f49f69884-v8xz2

AddedInterface

Add eth0 [10.128.0.224/23] from ovn-kubernetes

openstack

kubelet

cinder-86971-scheduler-0

Created

Created container: probe

openstack

cert-manager-certificates-issuing

neutron-internal-svc

Issuing

The certificate has been successfully issued

openstack

multus

neutron-f49f69884-v8xz2

AddedInterface

Add internalapi [172.17.0.32/24] from openstack/internalapi

openstack

kubelet

cinder-86971-api-0

Killing

Stopping container cinder-api

openstack

kubelet

dnsmasq-dns-7fb78888f7-pwtc8

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

multus

dnsmasq-dns-7fb78888f7-pwtc8

AddedInterface

Add eth0 [10.128.0.223/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-86971-scheduler-0

Started

Started container probe

openstack

cert-manager-certificates-request-manager

neutron-internal-svc

Requested

Created new CertificateRequest resource "neutron-internal-svc-1"

openstack

cert-manager-certificates-key-manager

neutron-internal-svc

Generated

Stored new private key in temporary Secret resource "neutron-internal-svc-n5d22"

openstack

kubelet

cinder-86971-api-0

Started

Started container cinder-api

openstack

cert-manager-certificates-trigger

neutron-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-589dd8c5c-bm6b7

Killing

Stopping container dnsmasq-dns

openstack

cert-manager-certificates-trigger

neutron-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-vault

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

neutron-f49f69884-v8xz2

Started

Started container neutron-httpd

openstack

kubelet

neutron-f49f69884-v8xz2

Created

Created container: neutron-httpd

openstack

kubelet

neutron-f49f69884-v8xz2

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

cert-manager-certificates-issuing

neutron-public-svc

Issuing

The certificate has been successfully issued

openstack

kubelet

neutron-f49f69884-v8xz2

Started

Started container neutron-api

openstack

kubelet

neutron-f49f69884-v8xz2

Created

Created container: neutron-api

openstack

cert-manager-certificates-request-manager

neutron-public-svc

Requested

Created new CertificateRequest resource "neutron-public-svc-1"

openstack

cert-manager-certificates-key-manager

neutron-public-svc

Generated

Stored new private key in temporary Secret resource "neutron-public-svc-cphsc"

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

neutron-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

neutron-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-approver

neutron-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

neutron-public-route

Generated

Stored new private key in temporary Secret resource "neutron-public-route-h96f5"

openstack

kubelet

dnsmasq-dns-7fb78888f7-pwtc8

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-issuing

neutron-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-7fb78888f7-pwtc8

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificates-request-manager

neutron-public-route

Requested

Created new CertificateRequest resource "neutron-public-route-1"
(x25)

openstack

metallb-speaker

dnsmasq-dns

nodeAssigned

announcing from node "master-0" with protocol "layer2"
(x2)

openstack

statefulset-controller

cinder-86971-api

SuccessfulCreate

create Pod cinder-86971-api-0 in StatefulSet cinder-86971-api successful

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-fd8d8c7c7 to 1

openstack

kubelet

dnsmasq-dns-7fb78888f7-pwtc8

Started

Started container dnsmasq-dns

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-86971-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" already present on machine

openstack

replicaset-controller

neutron-fd8d8c7c7

SuccessfulCreate

Created pod: neutron-fd8d8c7c7-w5vwh

openstack

multus

cinder-86971-api-0

AddedInterface

Add eth0 [10.128.0.225/23] from ovn-kubernetes

openstack

kubelet

neutron-fd8d8c7c7-w5vwh

Started

Started container neutron-httpd

openstack

kubelet

neutron-fd8d8c7c7-w5vwh

Created

Created container: neutron-httpd

openstack

kubelet

neutron-fd8d8c7c7-w5vwh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

cinder-86971-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" already present on machine

openstack

kubelet

cinder-86971-api-0

Started

Started container cinder-86971-api-log

openstack

kubelet

cinder-86971-api-0

Created

Created container: cinder-86971-api-log

openstack

multus

neutron-fd8d8c7c7-w5vwh

AddedInterface

Add internalapi [172.17.0.33/24] from openstack/internalapi

openstack

multus

neutron-fd8d8c7c7-w5vwh

AddedInterface

Add eth0 [10.128.0.226/23] from ovn-kubernetes

openstack

kubelet

neutron-fd8d8c7c7-w5vwh

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

neutron-fd8d8c7c7-w5vwh

Started

Started container neutron-api

openstack

kubelet

neutron-fd8d8c7c7-w5vwh

Created

Created container: neutron-api

openstack

kubelet

cinder-86971-api-0

Created

Created container: cinder-api

openstack

kubelet

cinder-86971-api-0

Started

Started container cinder-api

openstack

statefulset-controller

cinder-86971-volume-lvm-iscsi

SuccessfulDelete

delete Pod cinder-86971-volume-lvm-iscsi-0 in StatefulSet cinder-86971-volume-lvm-iscsi successful

openstack

statefulset-controller

cinder-86971-scheduler

SuccessfulDelete

delete Pod cinder-86971-scheduler-0 in StatefulSet cinder-86971-scheduler successful

openstack

replicaset-controller

ironic-neutron-agent-89874fdc8

SuccessfulCreate

Created pod: ironic-neutron-agent-89874fdc8-kjtzj

openstack

deployment-controller

ironic-neutron-agent

ScalingReplicaSet

Scaled up replica set ironic-neutron-agent-89874fdc8 to 1

openstack

replicaset-controller

dnsmasq-dns-699fc4cfdf

SuccessfulCreate

Created pod: dnsmasq-dns-699fc4cfdf-cmxnl

openstack

kubelet

cinder-86971-scheduler-0

Killing

Stopping container cinder-scheduler

openstack

job-controller

ironic-inspector-db-create

SuccessfulCreate

Created pod: ironic-inspector-db-create-sdzv8

openstack

statefulset-controller

cinder-86971-backup

SuccessfulDelete

delete Pod cinder-86971-backup-0 in StatefulSet cinder-86971-backup successful

openstack

kubelet

cinder-86971-scheduler-0

Killing

Stopping container probe

openstack

persistentvolume-controller

var-lib-ironic-ironic-conductor-0

WaitForFirstConsumer

waiting for first consumer to be created before binding

openstack

kubelet

dnsmasq-dns-7fb78888f7-pwtc8

Killing

Stopping container dnsmasq-dns

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Killing

Stopping container probe

openstack

replicaset-controller

dnsmasq-dns-7fb78888f7

SuccessfulDelete

Deleted pod: dnsmasq-dns-7fb78888f7-pwtc8

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Killing

Stopping container cinder-volume
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

ironic-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

statefulset-controller

ironic-conductor

SuccessfulCreate

create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success

openstack

statefulset-controller

ironic-conductor

SuccessfulCreate

create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful

openstack

metallb-controller

ironic-internal

IPAllocated

Assigned IP ["172.20.1.80"]

openstack

persistentvolume-controller

var-lib-ironic-ironic-conductor-0

ExternalProvisioning

Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack

job-controller

ironic-inspector-d904-account-create-update

SuccessfulCreate

Created pod: ironic-inspector-d904-account-create-update-tc485

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

var-lib-ironic-ironic-conductor-0

Provisioning

External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0"

openstack

job-controller

ironic-db-sync

Completed

Job completed

openstack

cert-manager-certificaterequests-issuer-ca

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

ironic-internal-svc

Generated

Stored new private key in temporary Secret resource "ironic-internal-svc-7zws4"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

kubelet

cinder-86971-backup-0

Killing

Stopping container probe

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled up replica set ironic-f97759bbc to 1

openstack

cert-manager-certificaterequests-issuer-vault

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x25)

openstack

metallb-speaker

dnsmasq-dns-ironic

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

cert-manager-certificaterequests-issuer-acme

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-86971-backup-0

Killing

Stopping container cinder-backup

openstack

cert-manager-certificates-request-manager

ironic-internal-svc

Requested

Created new CertificateRequest resource "ironic-internal-svc-1"

openstack

cert-manager-certificaterequests-approver

ironic-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

replicaset-controller

ironic-f97759bbc

SuccessfulCreate

Created pod: ironic-f97759bbc-nbv8w

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

ironic-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

ironic-public-svc

Generated

Stored new private key in temporary Secret resource "ironic-public-svc-hxpds"

openstack

cert-manager-certificates-issuing

ironic-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:94ca191a8eb2bb9060456016091883ff81af6060e5e63191bca57f60f0f48f77"

openstack

cert-manager-certificates-issuing

ironic-public-svc

Issuing

The certificate has been successfully issued

openstack

multus

ironic-neutron-agent-89874fdc8-kjtzj

AddedInterface

Add eth0 [10.128.0.228/23] from ovn-kubernetes

openstack

cert-manager-certificates-request-manager

ironic-public-svc

Requested

Created new CertificateRequest resource "ironic-public-svc-1"

openstack

cert-manager-certificates-trigger

ironic-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-acme

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-699fc4cfdf-cmxnl

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

multus

dnsmasq-dns-699fc4cfdf-cmxnl

AddedInterface

Add eth0 [10.128.0.230/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

ironic-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

multus

ironic-inspector-db-create-sdzv8

AddedInterface

Add eth0 [10.128.0.227/23] from ovn-kubernetes

openstack

kubelet

ironic-inspector-db-create-sdzv8

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

topolvm.io_lvms-operator-cc6c44d98-tvcmb_6021d44c-b54c-48ab-939c-95c21bdc538a

var-lib-ironic-ironic-conductor-0

ProvisioningSucceeded

Successfully provisioned volume pvc-f8d69209-b4b7-4f2f-ae57-1537b9cc303f

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

multus

ironic-f97759bbc-nbv8w

AddedInterface

Add eth0 [10.128.0.231/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-inspector-d904-account-create-update-tc485

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

cert-manager-certificaterequests-issuer-acme

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-699fc4cfdf-cmxnl

Created

Created container: init

openstack

kubelet

ironic-inspector-db-create-sdzv8

Created

Created container: mariadb-database-create

openstack

kubelet

ironic-inspector-db-create-sdzv8

Started

Started container mariadb-database-create

openstack

kubelet

dnsmasq-dns-699fc4cfdf-cmxnl

Started

Started container init

openstack

cert-manager-certificaterequests-issuer-vault

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

ironic-f97759bbc-nbv8w

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:2a7b09406cd6fe8e40f1712aa7a8836530959c80b4f48553677964546875f12f"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

multus

ironic-inspector-d904-account-create-update-tc485

AddedInterface

Add eth0 [10.128.0.229/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-approver

ironic-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io
(x2)

openstack

statefulset-controller

cinder-86971-volume-lvm-iscsi

SuccessfulCreate

create Pod cinder-86971-volume-lvm-iscsi-0 in StatefulSet cinder-86971-volume-lvm-iscsi successful

openstack

cert-manager-certificates-trigger

ironic-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-request-manager

ironic-public-route

Requested

Created new CertificateRequest resource "ironic-public-route-1"

openstack

cert-manager-certificates-key-manager

ironic-public-route

Generated

Stored new private key in temporary Secret resource "ironic-public-route-64224"

openstack

cert-manager-certificates-issuing

ironic-public-route

Issuing

The certificate has been successfully issued

openstack

multus

cinder-86971-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.233/23] from ovn-kubernetes
(x2)

openstack

statefulset-controller

cinder-86971-backup

SuccessfulCreate

create Pod cinder-86971-backup-0 in StatefulSet cinder-86971-backup successful

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:46e8a618b5c7be7e112d51c61b7a55c4ece540d5a7adc2b1718364a79d3fe60c" already present on machine

openstack

kubelet

dnsmasq-dns-699fc4cfdf-cmxnl

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine
(x2)

openstack

statefulset-controller

cinder-86971-scheduler

SuccessfulCreate

create Pod cinder-86971-scheduler-0 in StatefulSet cinder-86971-scheduler successful

openstack

kubelet

ironic-inspector-d904-account-create-update-tc485

Created

Created container: mariadb-account-create-update

openstack

kubelet

ironic-inspector-d904-account-create-update-tc485

Started

Started container mariadb-account-create-update

openstack

replicaset-controller

ironic-6767bc4dd7

SuccessfulCreate

Created pod: ironic-6767bc4dd7-cp8fn

openstack

kubelet

dnsmasq-dns-699fc4cfdf-cmxnl

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-699fc4cfdf-cmxnl

Created

Created container: dnsmasq-dns

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled up replica set ironic-6767bc4dd7 to 1

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled up replica set placement-5dbd89f674 to 1

openstack

replicaset-controller

placement-5dbd89f674

SuccessfulCreate

Created pod: placement-5dbd89f674-7gtrq

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Created

Created container: cinder-volume

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Started

Started container cinder-volume

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:46e8a618b5c7be7e112d51c61b7a55c4ece540d5a7adc2b1718364a79d3fe60c" already present on machine

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:94ca191a8eb2bb9060456016091883ff81af6060e5e63191bca57f60f0f48f77" in 4.642s (4.642s including waiting). Image size: 655324502 bytes.

openstack

job-controller

ironic-inspector-db-create

Completed

Job completed

openstack

kubelet

ironic-f97759bbc-nbv8w

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:2a7b09406cd6fe8e40f1712aa7a8836530959c80b4f48553677964546875f12f" in 4.102s (4.102s including waiting). Image size: 536338720 bytes.

openstack

kubelet

cinder-86971-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9b872cb203a96eecc6a17a2d4bb8ff46d1466c7be3000cddd68be89f74016777" already present on machine

openstack

multus

placement-5dbd89f674-7gtrq

AddedInterface

Add eth0 [10.128.0.237/23] from ovn-kubernetes

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Started

Started container probe

openstack

multus

cinder-86971-scheduler-0

AddedInterface

Add eth0 [10.128.0.234/23] from ovn-kubernetes

openstack

kubelet

ironic-conductor-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:7dc1464a072b28d5bc1e10127f898f61e85cffb63a67a51a15fd01322da295fa" already present on machine

openstack

multus

ironic-conductor-0

AddedInterface

Add eth0 [10.128.0.232/23] from ovn-kubernetes

openstack

kubelet

placement-5dbd89f674-7gtrq

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:11d4431e4af1735fbd9d425596f81dd62b0ca934d84d7c4e67902656c2b688d3" already present on machine

openstack

multus

cinder-86971-backup-0

AddedInterface

Add eth0 [10.128.0.235/23] from ovn-kubernetes

openstack

kubelet

ironic-f97759bbc-nbv8w

Started

Started container init

openstack

kubelet

ironic-f97759bbc-nbv8w

Created

Created container: init

openstack

metallb-speaker

cinder-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

multus

ironic-conductor-0

AddedInterface

Add ironic [172.20.1.31/24] from openstack/ironic

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:2a7b09406cd6fe8e40f1712aa7a8836530959c80b4f48553677964546875f12f" already present on machine

openstack

multus

cinder-86971-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

multus

ironic-6767bc4dd7-cp8fn

AddedInterface

Add eth0 [10.128.0.236/23] from ovn-kubernetes

openstack

kubelet

cinder-86971-volume-lvm-iscsi-0

Created

Created container: probe

openstack

kubelet

cinder-86971-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b6653696345c9de7573839c805c94401dd032a8d30027218576406bf4654806e" already present on machine

openstack

kubelet

placement-5dbd89f674-7gtrq

Created

Created container: placement-log

openstack

kubelet

cinder-86971-backup-0

Started

Started container cinder-backup

openstack

kubelet

placement-5dbd89f674-7gtrq

Started

Started container placement-log

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Created

Created container: init

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Started

Started container init

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:2a7b09406cd6fe8e40f1712aa7a8836530959c80b4f48553677964546875f12f" already present on machine

openstack

kubelet

cinder-86971-backup-0

Created

Created container: cinder-backup

openstack

kubelet

cinder-86971-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b6653696345c9de7573839c805c94401dd032a8d30027218576406bf4654806e" already present on machine

openstack

kubelet

cinder-86971-scheduler-0

Started

Started container cinder-scheduler

openstack

kubelet

placement-5dbd89f674-7gtrq

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:11d4431e4af1735fbd9d425596f81dd62b0ca934d84d7c4e67902656c2b688d3" already present on machine

openstack

kubelet

placement-5dbd89f674-7gtrq

Created

Created container: placement-api

openstack

kubelet

cinder-86971-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9b872cb203a96eecc6a17a2d4bb8ff46d1466c7be3000cddd68be89f74016777" already present on machine

openstack

kubelet

placement-5dbd89f674-7gtrq

Started

Started container placement-api

openstack

kubelet

ironic-f97759bbc-nbv8w

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:2a7b09406cd6fe8e40f1712aa7a8836530959c80b4f48553677964546875f12f" already present on machine

openstack

kubelet

ironic-conductor-0

Started

Started container init

openstack

kubelet

ironic-conductor-0

Created

Created container: init

openstack

job-controller

ironic-inspector-d904-account-create-update

Completed

Job completed

openstack

kubelet

cinder-86971-scheduler-0

Created

Created container: cinder-scheduler

openstack

kubelet

cinder-86971-scheduler-0

Started

Started container probe

openstack

kubelet

cinder-86971-backup-0

Created

Created container: probe

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Started

Started container ironic-api

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Created

Created container: ironic-api

openstack

kubelet

cinder-86971-backup-0

Started

Started container probe

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Created

Created container: ironic-api-log

openstack

kubelet

ironic-f97759bbc-nbv8w

Started

Started container ironic-api-log

openstack

kubelet

ironic-f97759bbc-nbv8w

Created

Created container: ironic-api-log

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Started

Started container ironic-api-log

openstack

kubelet

ironic-6767bc4dd7-cp8fn

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:2a7b09406cd6fe8e40f1712aa7a8836530959c80b4f48553677964546875f12f" already present on machine

openstack

kubelet

cinder-86971-scheduler-0

Created

Created container: probe

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Killing

Stopping container dnsmasq-dns

openstack

replicaset-controller

dnsmasq-dns-7bbc6577f5

SuccessfulDelete

Deleted pod: dnsmasq-dns-7bbc6577f5-mldsh

openstack

kubelet

dnsmasq-dns-7bbc6577f5-mldsh

Unhealthy

Readiness probe failed: dial tcp 10.128.0.209:5353: connect: connection refused
(x2)

openstack

kubelet

ironic-f97759bbc-nbv8w

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:2a7b09406cd6fe8e40f1712aa7a8836530959c80b4f48553677964546875f12f" already present on machine

openstack

kubelet

ironic-conductor-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:0dea7fa3ede253c003e819495c9c017380d89d6827460d7d98ec4d6447440797"
(x2)

openstack

kubelet

ironic-f97759bbc-nbv8w

Created

Created container: ironic-api
(x2)

openstack

kubelet

ironic-f97759bbc-nbv8w

Started

Started container ironic-api

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

Unhealthy

Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 7cbabceeaaf5d4439754f51f551dc58268ac32bc0a835865f49e6eb911d57031 is running failed: container process not found

openstack

metallb-speaker

keystone-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

job-controller

ironic-inspector-db-sync

SuccessfulCreate

Created pod: ironic-inspector-db-sync-hst88
(x3)

openstack

kubelet

ironic-f97759bbc-nbv8w

BackOff

Back-off restarting failed container ironic-api in pod ironic-f97759bbc-nbv8w_openstack(9e3ae5f4-4a11-4c09-9831-effc4a588f9b)

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

Unhealthy

Liveness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 7cbabceeaaf5d4439754f51f551dc58268ac32bc0a835865f49e6eb911d57031 is running failed: container process not found

openstack

multus

ironic-inspector-db-sync-hst88

AddedInterface

Add eth0 [10.128.0.238/23] from ovn-kubernetes

openstack

deployment-controller

ironic

ScalingReplicaSet

Scaled down replica set ironic-f97759bbc to 0 from 1

openstack

replicaset-controller

ironic-f97759bbc

SuccessfulDelete

Deleted pod: ironic-f97759bbc-nbv8w

openstack

kubelet

ironic-f97759bbc-nbv8w

Killing

Stopping container ironic-api-log

openstack

kubelet

ironic-inspector-db-sync-hst88

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:ca987f85e8c33f1a9dd33dd0e8422facaf1f38aa1fa48ce8357ff541645e74db"
(x3)

openstack

metallb-speaker

ironic-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

ironic-inspector-db-sync-hst88

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:ca987f85e8c33f1a9dd33dd0e8422facaf1f38aa1fa48ce8357ff541645e74db" in 2.502s (2.502s including waiting). Image size: 539743830 bytes.

openstack

kubelet

ironic-inspector-db-sync-hst88

Started

Started container ironic-inspector-db-sync

openstack

multus

openstackclient

AddedInterface

Add eth0 [10.128.0.239/23] from ovn-kubernetes

openstack

kubelet

ironic-inspector-db-sync-hst88

Created

Created container: ironic-inspector-db-sync

openstack

kubelet

openstackclient

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b0a455c844ca790160c48aed1aaf8bc69ceb4b9ed4a4fa1717114e6e2e2fda9"
(x2)

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

BackOff

Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-89874fdc8-kjtzj_openstack(55b7e31a-1da5-4528-b904-db7de86e1f26)

openstack

replicaset-controller

swift-proxy-7b675b8b94

SuccessfulCreate

Created pod: swift-proxy-7b675b8b94-rfvgr

openstack

deployment-controller

swift-proxy

ScalingReplicaSet

Scaled up replica set swift-proxy-7b675b8b94 to 1

openstack

kubelet

swift-proxy-7b675b8b94-rfvgr

Created

Created container: proxy-httpd

openstack

kubelet

swift-proxy-7b675b8b94-rfvgr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:4edf7a23d172c4a133480d60844e80fb843a6aaf50a68d8d5fec13ae0c3c03d7" already present on machine

openstack

multus

swift-proxy-7b675b8b94-rfvgr

AddedInterface

Add eth0 [10.128.0.240/23] from ovn-kubernetes

openstack

kubelet

swift-proxy-7b675b8b94-rfvgr

Created

Created container: proxy-server

openstack

kubelet

swift-proxy-7b675b8b94-rfvgr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:4edf7a23d172c4a133480d60844e80fb843a6aaf50a68d8d5fec13ae0c3c03d7" already present on machine

openstack

kubelet

swift-proxy-7b675b8b94-rfvgr

Started

Started container proxy-httpd

openstack

job-controller

ironic-inspector-db-sync

Completed

Job completed

openstack

kubelet

swift-proxy-7b675b8b94-rfvgr

Started

Started container proxy-server

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled down replica set neutron-f49f69884 to 0 from 1
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

metallb-controller

ironic-inspector-internal

IPAllocated

Assigned IP ["172.20.1.80"]
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

replicaset-controller

neutron-f49f69884

SuccessfulDelete

Deleted pod: neutron-f49f69884-v8xz2
(x2)

openstack

metallb-controller

ironic-inspector-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

job-controller

nova-cell0-db-create

SuccessfulCreate

Created pod: nova-cell0-db-create-64285

openstack

job-controller

nova-cell1-db-create

SuccessfulCreate

Created pod: nova-cell1-db-create-26xt5

openstack

kubelet

neutron-f49f69884-v8xz2

Killing

Stopping container neutron-api

openstack

job-controller

nova-cell0-0300-account-create-update

SuccessfulCreate

Created pod: nova-cell0-0300-account-create-update-b66m5

openstack

kubelet

neutron-f49f69884-v8xz2

Killing

Stopping container neutron-httpd

openstack

job-controller

nova-api-8a73-account-create-update

SuccessfulCreate

Created pod: nova-api-8a73-account-create-update-s57x2

openstack

replicaset-controller

dnsmasq-dns-7754f44b87

SuccessfulCreate

Created pod: dnsmasq-dns-7754f44b87-jrdnd

openstack

job-controller

nova-api-db-create

SuccessfulCreate

Created pod: nova-api-db-create-94ssk

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

ironic-inspector-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

ironic-inspector-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

ironic-inspector-internal-svc

Requested

Created new CertificateRequest resource "ironic-inspector-internal-svc-1"

openstack

cert-manager-certificates-key-manager

ironic-inspector-internal-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-rfmqb"

openstack

job-controller

nova-cell1-2b75-account-create-update

SuccessfulCreate

Created pod: nova-cell1-2b75-account-create-update-gqckp

openstack

cert-manager-certificates-trigger

ironic-inspector-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-route

Requested

Created new CertificateRequest resource "ironic-inspector-public-route-1"

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-route

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-route-9nhqf"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

ironic-inspector-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

ironic-inspector-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

ironic-inspector-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

ironic-inspector-public-svc

Requested

Created new CertificateRequest resource "ironic-inspector-public-svc-1"

openstack

cert-manager-certificaterequests-issuer-ca

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-trigger

ironic-inspector-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-key-manager

ironic-inspector-public-svc

Generated

Stored new private key in temporary Secret resource "ironic-inspector-public-svc-bxfnd"

openstack

cert-manager-certificaterequests-approver

ironic-inspector-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

ironic-inspector-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

statefulset-controller

ironic-inspector

SuccessfulDelete

delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

metallb-speaker

swift-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

glance-213eb-default-external-api-0

Killing

Stopping container glance-log

openstack

kubelet

glance-213eb-default-external-api-0

Killing

Stopping container glance-httpd
(x2)

openstack

statefulset-controller

glance-213eb-default-external-api

SuccessfulDelete

delete Pod glance-213eb-default-external-api-0 in StatefulSet glance-213eb-default-external-api successful
(x2)

openstack

statefulset-controller

glance-213eb-default-internal-api

SuccessfulDelete

delete Pod glance-213eb-default-internal-api-0 in StatefulSet glance-213eb-default-internal-api successful

openstack

kubelet

glance-213eb-default-internal-api-0

Killing

Stopping container glance-httpd

openstack

kubelet

glance-213eb-default-internal-api-0

Killing

Stopping container glance-log

openstack

replicaset-controller

placement-6cc7544794

SuccessfulDelete

Deleted pod: placement-6cc7544794-vmcq4

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled down replica set placement-6cc7544794 to 0 from 1

openstack

kubelet

placement-6cc7544794-vmcq4

Killing

Stopping container placement-log

openstack

kubelet

placement-6cc7544794-vmcq4

Killing

Stopping container placement-api

openstack

kubelet

openstackclient

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b0a455c844ca790160c48aed1aaf8bc69ceb4b9ed4a4fa1717114e6e2e2fda9" in 20.56s (20.56s including waiting). Image size: 594485614 bytes.
(x2)

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:94ca191a8eb2bb9060456016091883ff81af6060e5e63191bca57f60f0f48f77" already present on machine

openstack

kubelet

openstackclient

Started

Started container openstackclient

openstack

kubelet

ironic-conductor-0

Started

Started container ironic-python-agent-init
(x4)

openstack

metallb-speaker

neutron-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

ironic-conductor-0

Created

Created container: ironic-python-agent-init

openstack

kubelet

ironic-conductor-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:0dea7fa3ede253c003e819495c9c017380d89d6827460d7d98ec4d6447440797" in 28.368s (28.368s including waiting). Image size: 785155373 bytes.

openstack

kubelet

openstackclient

Created

Created container: openstackclient
(x3)

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

Created

Created container: ironic-neutron-agent
(x3)

openstack

kubelet

ironic-neutron-agent-89874fdc8-kjtzj

Started

Started container ironic-neutron-agent

openstack

kubelet

nova-cell1-db-create-26xt5

Started

Started container mariadb-database-create

openstack

multus

nova-cell0-0300-account-create-update-b66m5

AddedInterface

Add eth0 [10.128.0.247/23] from ovn-kubernetes

openstack

multus

nova-cell0-db-create-64285

AddedInterface

Add eth0 [10.128.0.244/23] from ovn-kubernetes

openstack

kubelet

nova-cell0-db-create-64285

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.242/23] from ovn-kubernetes

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

multus

nova-cell1-2b75-account-create-update-gqckp

AddedInterface

Add eth0 [10.128.0.248/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-2b75-account-create-update-gqckp

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

nova-api-8a73-account-create-update-s57x2

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

multus

nova-api-8a73-account-create-update-s57x2

AddedInterface

Add eth0 [10.128.0.245/23] from ovn-kubernetes

openstack

multus

nova-api-db-create-94ssk

AddedInterface

Add eth0 [10.128.0.243/23] from ovn-kubernetes

openstack

kubelet

nova-api-db-create-94ssk

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:0dea7fa3ede253c003e819495c9c017380d89d6827460d7d98ec4d6447440797" already present on machine

openstack

kubelet

nova-cell1-db-create-26xt5

Created

Created container: mariadb-database-create

openstack

kubelet

nova-cell1-db-create-26xt5

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

multus

nova-cell1-db-create-26xt5

AddedInterface

Add eth0 [10.128.0.246/23] from ovn-kubernetes

openstack

multus

dnsmasq-dns-7754f44b87-jrdnd

AddedInterface

Add eth0 [10.128.0.241/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7754f44b87-jrdnd

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

nova-cell0-0300-account-create-update-b66m5

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa" already present on machine

openstack

kubelet

nova-api-db-create-94ssk

Created

Created container: mariadb-database-create
(x5)

openstack

metallb-speaker

placement-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

nova-api-db-create-94ssk

Started

Started container mariadb-database-create

openstack

kubelet

nova-cell1-2b75-account-create-update-gqckp

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-api-8a73-account-create-update-s57x2

Started

Started container mariadb-account-create-update

openstack

kubelet

ironic-inspector-0

Created

Created container: ironic-python-agent-init

openstack

kubelet

nova-api-8a73-account-create-update-s57x2

Created

Created container: mariadb-account-create-update

openstack

kubelet

nova-cell0-db-create-64285

Created

Created container: mariadb-database-create

openstack

kubelet

nova-cell0-db-create-64285

Started

Started container mariadb-database-create

openstack

kubelet

ironic-inspector-0

Started

Started container ironic-python-agent-init

openstack

kubelet

dnsmasq-dns-7754f44b87-jrdnd

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine

openstack

kubelet

dnsmasq-dns-7754f44b87-jrdnd

Started

Started container init
(x3)

openstack

statefulset-controller

glance-213eb-default-external-api

SuccessfulCreate

create Pod glance-213eb-default-external-api-0 in StatefulSet glance-213eb-default-external-api successful

openstack

kubelet

nova-cell1-2b75-account-create-update-gqckp

Created

Created container: mariadb-account-create-update

openstack

kubelet

nova-cell0-0300-account-create-update-b66m5

Started

Started container mariadb-account-create-update

openstack

kubelet

nova-cell0-0300-account-create-update-b66m5

Created

Created container: mariadb-account-create-update

openstack

kubelet

dnsmasq-dns-7754f44b87-jrdnd

Created

Created container: init

openstack

kubelet

dnsmasq-dns-7754f44b87-jrdnd

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-7754f44b87-jrdnd

Created

Created container: dnsmasq-dns
(x3)

openstack

statefulset-controller

glance-213eb-default-internal-api

SuccessfulCreate

create Pod glance-213eb-default-internal-api-0 in StatefulSet glance-213eb-default-internal-api successful
(x2)

openstack

statefulset-controller

ironic-inspector

SuccessfulCreate

create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful

openstack

multus

ironic-inspector-0

AddedInterface

Add eth0 [10.128.0.251/23] from ovn-kubernetes

openstack

multus

ironic-inspector-0

AddedInterface

Add ironic [172.20.1.32/24] from openstack/ironic

openstack

kubelet

ironic-inspector-0

Pulled

Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:0dea7fa3ede253c003e819495c9c017380d89d6827460d7d98ec4d6447440797" already present on machine

openstack

multus

glance-213eb-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.250/23] from ovn-kubernetes

openstack

multus

glance-213eb-default-external-api-0

AddedInterface

Add eth0 [10.128.0.249/23] from ovn-kubernetes

openstack

multus

glance-213eb-default-external-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

kubelet

glance-213eb-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine

openstack | kubelet | ironic-inspector-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:5ebcf9bc3c064a788777232f143300a64def982588dbcff6a82e779cfacc28c4"
openstack | job-controller | nova-cell0-db-create | Completed | Job completed
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init
openstack | job-controller | nova-api-db-create | Completed | Job completed
openstack | kubelet | glance-213eb-default-internal-api-0 | Started | Started container glance-log
openstack | kubelet | glance-213eb-default-external-api-0 | Created | Created container: glance-httpd
openstack | job-controller | nova-cell1-db-create | Completed | Job completed
openstack | kubelet | glance-213eb-default-internal-api-0 | Created | Created container: glance-log
openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:5ebcf9bc3c064a788777232f143300a64def982588dbcff6a82e779cfacc28c4"
openstack | kubelet | glance-213eb-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine
openstack | multus | glance-213eb-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage
openstack | kubelet | glance-213eb-default-external-api-0 | Created | Created container: glance-log
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init
openstack | kubelet | glance-213eb-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine
openstack | kubelet | glance-213eb-default-external-api-0 | Started | Started container glance-log
openstack | kubelet | glance-213eb-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" already present on machine
openstack | kubelet | glance-213eb-default-external-api-0 | Started | Started container glance-httpd
openstack | job-controller | nova-cell1-2b75-account-create-update | Completed | Job completed
openstack | job-controller | nova-cell0-0300-account-create-update | Completed | Job completed
openstack | kubelet | glance-213eb-default-internal-api-0 | Created | Created container: glance-httpd
openstack | job-controller | nova-api-8a73-account-create-update | Completed | Job completed
openstack | kubelet | glance-213eb-default-internal-api-0 | Started | Started container glance-httpd
openstack | job-controller | nova-cell0-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell0-conductor-db-sync-9xm4p
openstack | multus | nova-cell0-conductor-db-sync-9xm4p | AddedInterface | Add eth0 [10.128.0.252/23] from ovn-kubernetes
openstack | kubelet | nova-cell0-conductor-db-sync-9xm4p | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e"
openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:5ebcf9bc3c064a788777232f143300a64def982588dbcff6a82e779cfacc28c4" in 4.502s (4.502s including waiting). Image size: 657221885 bytes.
openstack | kubelet | ironic-inspector-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:5ebcf9bc3c064a788777232f143300a64def982588dbcff6a82e779cfacc28c4" in 4.506s (4.506s including waiting). Image size: 657221885 bytes.
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init
openstack | kubelet | ironic-conductor-0 | Started | Started container pxe-init
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init
openstack | kubelet | ironic-conductor-0 | Created | Created container: pxe-init
openstack | replicaset-controller | dnsmasq-dns-699fc4cfdf | SuccessfulDelete | Deleted pod: dnsmasq-dns-699fc4cfdf-cmxnl
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:ca987f85e8c33f1a9dd33dd0e8422facaf1f38aa1fa48ce8357ff541645e74db" already present on machine
openstack | kubelet | dnsmasq-dns-699fc4cfdf-cmxnl | Killing | Stopping container dnsmasq-dns
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:ca987f85e8c33f1a9dd33dd0e8422facaf1f38aa1fa48ce8357ff541645e74db" already present on machine
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:5ebcf9bc3c064a788777232f143300a64def982588dbcff6a82e779cfacc28c4" already present on machine
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:ca987f85e8c33f1a9dd33dd0e8422facaf1f38aa1fa48ce8357ff541645e74db" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot (x3)
openstack | metallb-speaker | glance-default-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | kubelet | nova-cell0-conductor-db-sync-9xm4p | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e" in 13.712s (13.712s including waiting). Image size: 668208107 bytes.
openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs
openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:ca987f85e8c33f1a9dd33dd0e8422facaf1f38aa1fa48ce8357ff541645e74db" already present on machine

openstack | kubelet | nova-cell0-conductor-db-sync-9xm4p | Created | Created container: nova-cell0-conductor-db-sync
openstack | kubelet | nova-cell0-conductor-db-sync-9xm4p | Started | Started container nova-cell0-conductor-db-sync
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq
openstack | metallb-speaker | ironic-inspector-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | job-controller | nova-cell0-conductor-db-sync | Completed | Job completed
openstack | statefulset-controller | nova-cell0-conductor | SuccessfulCreate | create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful
openstack | multus | nova-cell0-conductor-0 | AddedInterface | Add eth0 [10.128.0.253/23] from ovn-kubernetes
openstack | kubelet | nova-cell0-conductor-0 | Started | Started container nova-cell0-conductor-conductor
openstack | kubelet | nova-cell0-conductor-0 | Created | Created container: nova-cell0-conductor-conductor
openstack | kubelet | nova-cell0-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e" already present on machine
openstack | statefulset-controller | nova-cell1-compute-ironic-compute | SuccessfulCreate | create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful
openstack | replicaset-controller | dnsmasq-dns-8459745b77 | SuccessfulCreate | Created pod: dnsmasq-dns-8459745b77-pkh7k
default | endpoint-controller | nova-metadata-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/nova-metadata-internal: endpoints "nova-metadata-internal" already exists (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | job-controller | nova-cell0-cell-mapping | SuccessfulCreate | Created pod: nova-cell0-cell-mapping-c5sqt (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | metallb-controller | nova-metadata-internal | IPAllocated | Assigned IP ["172.17.0.80"] (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | cert-manager-certificates-trigger | nova-metadata-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-acme | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.0/23] from ovn-kubernetes
openstack | cert-manager-certificates-request-manager | nova-metadata-internal-svc | Requested | Created new CertificateRequest resource "nova-metadata-internal-svc-1"
openstack | cert-manager-certificates-key-manager | nova-metadata-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-metadata-internal-svc-5cjfc"
openstack | kubelet | nova-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:c9b4945c93b9e450adbdb035f59d0f911b9e9c22b2ad694c58e37c43e3e8d697"
openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-metadata-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.1/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c"
openstack | cert-manager-certificaterequests-issuer-venafi | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | nova-cell0-cell-mapping-c5sqt | AddedInterface | Add eth0 [10.128.0.254/23] from ovn-kubernetes
openstack | kubelet | nova-cell0-cell-mapping-c5sqt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e" already present on machine
openstack | cert-manager-certificates-issuing | nova-metadata-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-cell0-cell-mapping-c5sqt | Started | Started container nova-manage
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | multus | nova-cell1-compute-ironic-compute-0 | AddedInterface | Add eth0 [10.128.0.255/23] from ovn-kubernetes
openstack | job-controller | nova-cell1-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell1-conductor-db-sync-2rz24
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | kubelet | nova-cell1-novncproxy-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:8962932efdb727ae0430f64b608878a1b93179b46dc26fd4f5d38c8eccc00f5d"
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.2/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-svc | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-bd6wx"
openstack | kubelet | nova-cell0-cell-mapping-c5sqt | Created | Created container: nova-manage
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-svc | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1"

openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-svc | Issuing | The certificate has been successfully issued
openstack | multus | dnsmasq-dns-8459745b77-pkh7k | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-8459745b77-pkh7k | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine
openstack | kubelet | dnsmasq-dns-8459745b77-pkh7k | Created | Created container: init
openstack | kubelet | dnsmasq-dns-8459745b77-pkh7k | Started | Started container init
openstack | kubelet | nova-api-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c"
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355"
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-route | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1"
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-vencrypt | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-bh7hr"
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-route | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-7m676"
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-vencrypt | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | multus | nova-cell1-conductor-db-sync-2rz24 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-8459745b77-pkh7k | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-vencrypt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-cell1-conductor-db-sync-2rz24 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e" already present on machine
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulDelete | delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-vencrypt | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-vencrypt | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1"
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-cell1-conductor-db-sync-2rz24 | Created | Created container: nova-cell1-conductor-db-sync
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:8962932efdb727ae0430f64b608878a1b93179b46dc26fd4f5d38c8eccc00f5d" in 2.798s (2.798s including waiting). Image size: 670568433 bytes.
openstack | kubelet | nova-cell1-conductor-db-sync-2rz24 | Started | Started container nova-cell1-conductor-db-sync
openstack | kubelet | dnsmasq-dns-8459745b77-pkh7k | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-8459745b77-pkh7k | Created | Created container: dnsmasq-dns
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:7dc1464a072b28d5bc1e10127f898f61e85cffb63a67a51a15fd01322da295fa" already present on machine
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" in 3.345s (3.345s including waiting). Image size: 685002983 bytes.
openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:c9b4945c93b9e450adbdb035f59d0f911b9e9c22b2ad694c58e37c43e3e8d697" in 3.359s (3.359s including waiting). Image size: 668208104 bytes.
openstack | kubelet | nova-api-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" in 3.028s (3.028s including waiting). Image size: 685002983 bytes.
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | ironic-conductor-0 | Started | Started container httpboot
openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-conductor
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy
openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-conductor
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:5ebcf9bc3c064a788777232f143300a64def982588dbcff6a82e779cfacc28c4" already present on machine

openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | ironic-conductor-0 | Created | Created container: httpboot
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:5ebcf9bc3c064a788777232f143300a64def982588dbcff6a82e779cfacc28c4" already present on machine
openstack | kubelet | ironic-conductor-0 | Created | Created container: dnsmasq
openstack | kubelet | ironic-conductor-0 | Started | Started container dnsmasq
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.6/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | dnsmasq-dns-7754f44b87-jrdnd | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-7754f44b87 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7754f44b87-jrdnd
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.2:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.2:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | dnsmasq-dns-7754f44b87-jrdnd | Unhealthy | Readiness probe failed: dial tcp 10.128.0.241:5353: connect: connection refused
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355" in 14.771s (14.771s including waiting). Image size: 1216409983 bytes.
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Created | Created container: nova-cell1-compute-ironic-compute-compute
openstack | job-controller | nova-cell0-cell-mapping | Completed | Job completed
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Started | Started container nova-cell1-compute-ironic-compute-compute
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed
openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful
openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e" already present on machine
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.8/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.1.7/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.9/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:c9b4945c93b9e450adbdb035f59d0f911b9e9c22b2ad694c58e37c43e3e8d697" already present on machine
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.10/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.8:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.8:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.9:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.9:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x2)

openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulCreate | create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:8962932efdb727ae0430f64b608878a1b93179b46dc26fd4f5d38c8eccc00f5d" already present on machine
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.11/23] from ovn-kubernetes
openstack | replicaset-controller | dnsmasq-dns-5cc8bb4897 | SuccessfulCreate | Created pod: dnsmasq-dns-5cc8bb4897-sws9x
openstack | metallb-controller | nova-internal | IPAllocated | Assigned IP ["172.17.0.80"] (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | kubelet | dnsmasq-dns-5cc8bb4897-sws9x | Created | Created container: init
openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-trigger | nova-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-bt5kp"
openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1"
openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5cc8bb4897-sws9x | Started | Started container init
openstack | kubelet | dnsmasq-dns-5cc8bb4897-sws9x | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine
openstack | multus | dnsmasq-dns-5cc8bb4897-sws9x | AddedInterface | Add eth0 [10.128.1.12/23] from ovn-kubernetes
openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-request-manager | nova-public-svc | Requested | Created new CertificateRequest resource "nova-public-svc-1"
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | kubelet | dnsmasq-dns-5cc8bb4897-sws9x | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-5cc8bb4897-sws9x | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-5cc8bb4897-sws9x | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" already present on machine
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | nova-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-d2j4k"
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

nova-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

nova-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

nova-public-route

Requested

Created new CertificateRequest resource "nova-public-route-1"

openstack

cert-manager-certificates-key-manager

nova-public-route

Generated

Stored new private key in temporary Secret resource "nova-public-route-blmxj"

openstack

cert-manager-certificates-trigger

nova-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-venafi

nova-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

nova-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

nova-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

nova-api-0

Killing

Stopping container nova-api-log

openstack

kubelet

nova-api-0

Killing

Stopping container nova-api-api

openstack

job-controller

nova-cell1-cell-mapping

SuccessfulCreate

Created pod: nova-cell1-cell-mapping-8cwkr

openstack

job-controller

nova-cell1-host-discover

SuccessfulCreate

Created pod: nova-cell1-host-discover-8g65x

openstack

multus

nova-cell1-host-discover-8g65x

AddedInterface

Add eth0 [10.128.1.14/23] from ovn-kubernetes

openstack

multus

nova-cell1-cell-mapping-8cwkr

AddedInterface

Add eth0 [10.128.1.13/23] from ovn-kubernetes

openstack

kubelet

nova-cell1-host-discover-8g65x

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e" already present on machine

openstack

kubelet

nova-cell1-cell-mapping-8cwkr

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:3eef424a6a774edf70efa63eb6b5c669b23d9cc047c1dd9d18636b549206459e" already present on machine

openstack

kubelet

nova-cell1-cell-mapping-8cwkr

Started

Started container nova-manage

openstack

kubelet

nova-cell1-cell-mapping-8cwkr

Created

Created container: nova-manage

openstack

kubelet

nova-cell1-host-discover-8g65x

Started

Started container nova-manage

openstack

kubelet

nova-cell1-host-discover-8g65x

Created

Created container: nova-manage

openstack

kubelet

nova-api-0

Created

Created container: nova-api-log

openstack

kubelet

nova-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine

openstack

multus

nova-api-0

AddedInterface

Add eth0 [10.128.1.15/23] from ovn-kubernetes

openstack

kubelet

nova-api-0

Started

Started container nova-api-log

openstack

kubelet

nova-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine

openstack

kubelet

nova-api-0

Created

Created container: nova-api-api

openstack

kubelet

nova-api-0

Started

Started container nova-api-api
(x24)

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

(combined from similar events): Scaled down replica set dnsmasq-dns-8459745b77 to 0 from 1

openstack

replicaset-controller

dnsmasq-dns-8459745b77

SuccessfulDelete

Deleted pod: dnsmasq-dns-8459745b77-pkh7k

openstack

job-controller

nova-cell1-host-discover

Completed

Job completed

openstack

kubelet

dnsmasq-dns-8459745b77-pkh7k

Killing

Stopping container dnsmasq-dns

openstack

job-controller

nova-cell1-cell-mapping

Completed

Job completed

openstack

kubelet

nova-scheduler-0

Killing

Stopping container nova-scheduler-scheduler
(x2)

openstack

statefulset-controller

nova-scheduler

SuccessfulDelete

delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful
(x3)

openstack

statefulset-controller

nova-metadata

SuccessfulDelete

delete Pod nova-metadata-0 in StatefulSet nova-metadata successful

openstack

kubelet

nova-api-0

Killing

Stopping container nova-api-api

openstack

kubelet

nova-metadata-0

Killing

Stopping container nova-metadata-metadata
(x3)

openstack

statefulset-controller

nova-api

SuccessfulDelete

delete Pod nova-api-0 in StatefulSet nova-api successful

openstack

kubelet

nova-metadata-0

Killing

Stopping container nova-metadata-log

openstack

kubelet

nova-api-0

Killing

Stopping container nova-api-log

openstack

kubelet

nova-scheduler-0

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openstack

multus

nova-api-0

AddedInterface

Add eth0 [10.128.1.16/23] from ovn-kubernetes
(x4)

openstack

statefulset-controller

nova-api

SuccessfulCreate

create Pod nova-api-0 in StatefulSet nova-api successful

openstack

kubelet

nova-api-0

Created

Created container: nova-api-api

openstack

kubelet

nova-api-0

Started

Started container nova-api-log

openstack

kubelet

nova-api-0

Started

Started container nova-api-api

openstack

kubelet

nova-api-0

Created

Created container: nova-api-log

openstack

kubelet

nova-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine

openstack

kubelet

nova-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine

openstack

kubelet

nova-metadata-0

Unhealthy

Readiness probe failed: Get "https://10.128.1.8:8775/": read tcp 10.128.0.2:45006->10.128.1.8:8775: read: connection reset by peer

openstack

kubelet

nova-metadata-0

Unhealthy

Readiness probe failed: Get "https://10.128.1.8:8775/": read tcp 10.128.0.2:45012->10.128.1.8:8775: read: connection reset by peer
(x4)

openstack

statefulset-controller

nova-metadata

SuccessfulCreate

create Pod nova-metadata-0 in StatefulSet nova-metadata successful

openstack

multus

nova-metadata-0

AddedInterface

Add eth0 [10.128.1.17/23] from ovn-kubernetes

openstack

kubelet

nova-metadata-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine

openstack

kubelet

nova-metadata-0

Created

Created container: nova-metadata-log

openstack

kubelet

nova-metadata-0

Started

Started container nova-metadata-log

openstack

kubelet

nova-metadata-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c7906ad2cf5a1467684690a7612c36700d2c98ac2398ac588cb297b33a7f609c" already present on machine

openstack

kubelet

nova-metadata-0

Started

Started container nova-metadata-metadata

openstack

kubelet

nova-metadata-0

Created

Created container: nova-metadata-metadata
(x3)

openstack

statefulset-controller

nova-scheduler

SuccessfulCreate

create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful

openstack

kubelet

nova-scheduler-0

Started

Started container nova-scheduler-scheduler

openstack

kubelet

nova-scheduler-0

Created

Created container: nova-scheduler-scheduler

openstack

kubelet

nova-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:c9b4945c93b9e450adbdb035f59d0f911b9e9c22b2ad694c58e37c43e3e8d697" already present on machine

openstack

multus

nova-scheduler-0

AddedInterface

Add eth0 [10.128.1.18/23] from ovn-kubernetes

openstack

kubelet

nova-api-0

Unhealthy

Startup probe failed: Get "https://10.128.1.16:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

nova-api-0

Unhealthy

Startup probe failed: Get "https://10.128.1.16:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

nova-metadata-0

Unhealthy

Startup probe failed: Get "https://10.128.1.17:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openstack

kubelet

nova-metadata-0

Unhealthy

Startup probe failed: Get "https://10.128.1.17:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
(x3)

openstack

metallb-speaker

nova-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"
(x3)

openstack

metallb-speaker

nova-metadata-internal

nodeAssigned

announcing from node "master-0" with protocol "layer2"

sushy-emulator

replicaset-controller

sushy-emulator-78f6d7d749

SuccessfulDelete

Deleted pod: sushy-emulator-78f6d7d749-xgc79

sushy-emulator

deployment-controller

sushy-emulator

ScalingReplicaSet

Scaled down replica set sushy-emulator-78f6d7d749 to 0 from 1

sushy-emulator

kubelet

sushy-emulator-78f6d7d749-xgc79

Killing

Stopping container sushy-emulator

sushy-emulator

replicaset-controller

sushy-emulator-84965d5d88

SuccessfulCreate

Created pod: sushy-emulator-84965d5d88-9qs5d

sushy-emulator

deployment-controller

sushy-emulator

ScalingReplicaSet

Scaled up replica set sushy-emulator-84965d5d88 to 1

sushy-emulator

kubelet

sushy-emulator-84965d5d88-9qs5d

Started

Started container sushy-emulator

sushy-emulator

kubelet

sushy-emulator-84965d5d88-9qs5d

Created

Created container: sushy-emulator

sushy-emulator

kubelet

sushy-emulator-84965d5d88-9qs5d

Pulled

Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1771585490" already present on machine

sushy-emulator

multus

sushy-emulator-84965d5d88-9qs5d

AddedInterface

Add ironic [172.20.1.71/24] from sushy-emulator/ironic

sushy-emulator

multus

sushy-emulator-84965d5d88-9qs5d

AddedInterface

Add eth0 [10.128.1.19/23] from ovn-kubernetes
(x11)

openstack

rabbitmqcluster-controller

rabbitmq-cell1

SuccessfulUpdate

updated resource rabbitmq-cell1-nodes of Type *v1.Service
(x12)

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulUpdate

updated resource rabbitmq-nodes of Type *v1.Service

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openstack

multus

keystone-cron-29548681-5hg8p

AddedInterface

Add eth0 [10.128.1.20/23] from ovn-kubernetes

openstack

cronjob-controller

keystone-cron

SuccessfulCreate

Created job keystone-cron-29548681

openstack

job-controller

keystone-cron-29548681

SuccessfulCreate

Created pod: keystone-cron-29548681-5hg8p

openstack

kubelet

keystone-cron-29548681-5hg8p

Started

Started container keystone-cron

openstack

kubelet

keystone-cron-29548681-5hg8p

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:0eeb2759adca98fed8913fe00b0a87d706bde89efff3b5ef6d962bc3ca5204b0" already present on machine

openstack

kubelet

keystone-cron-29548681-5hg8p

Created

Created container: keystone-cron

openstack

job-controller

keystone-cron-29548681

Completed

Job completed

openstack

cronjob-controller

keystone-cron

SawCompletedJob

Saw completed job: keystone-cron-29548681, condition: Complete

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-must-gather-rnm26 namespace